Friday, September 30, 2011

Applied Stock Market Stochastics 2 (applications to options)

I wrote earlier in Applied Stock Market Stochastics about using the mean and standard deviation of stock market data to calculate probability distributions of where a stock might go in the future. I've decided to take what I learned from the previous post a step further and apply it to option derivatives.

Understanding options is a little math heavy; option pricing comes from the Black-Scholes equation, which assumes that the price of an equity follows geometric Brownian motion with constant drift and volatility. We can model this using the mean and standard deviation by plugging those parameters into a Gaussian function. The Black-Scholes formula is essentially a closed-form solution to the differential equation that describes this assumption.
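To make this concrete, here is a minimal sketch of the Black-Scholes closed-form price for a European call in Python. The spot, strike, rate, volatility and time-to-expiry in the example are illustrative values I've picked (the volatility roughly corresponds to the daily figures used later in this post), not quotes from any market data.

```python
# Minimal sketch: Black-Scholes closed-form price of a European call.
from math import exp, log, sqrt
from scipy.stats import norm

def black_scholes_call(S, K, T, r, sigma):
    """Call price assuming the underlying follows geometric Brownian motion
    with constant drift and volatility (the Black-Scholes assumptions)."""
    d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm.cdf(d1) - K * exp(-r * T) * norm.cdf(d2)

# Illustrative values: spot 537, strike 555, ~3 weeks to expiry,
# 1% risk-free rate, ~38% annualized volatility.
print(black_scholes_call(S=537.0, K=555.0, T=20 / 252, r=0.01, sigma=0.38))
```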

I got tired of just looking at the equation, so I decided it would be more interesting to build a statistical model of the motion of a stock's price. In my last post on Applied Stock Market Stochastics, I analyzed the historical prices of the Dow Jones index as a distribution of month-over-month percentage changes. The result showed that the stock market moves around in a roughly Gaussian manner.

I will assume that you have a basic understanding of put and call options. If you don't, the short story is that buying one of these contracts is a bet that the price of some equity will end up above or below some price; if it does, your profit is proportional to the difference (and if you need to learn more, hit the links or Google for more details).

With that in mind, the question I want to answer is the likelihood of making a profit or loss, and by how much. If we understand the probabilities of making a profit or a loss, we can then calculate the "expected value" of the trade: the average profit or loss we would expect if we made the same bet over and over again. If the expected value is positive, we expect to make money, and vice versa.

Now, I am not aiming for the level of sophisticated analysis that hedge funds do, and I won't pretend I know that much either, but I do have a great fascination with learning about things through a first-principles approach; and I think this is a great first-principles way to tackle the problem!

So let's first model the motion of an individual equity instead of a stock market index. In this case, I've downloaded Google's historical prices for the last 3 months (between June and September) and calculated the mean and standard deviation of its daily % change in price. The numbers I got were 0.0373% for the mean and 2.409% for the standard deviation. For actual use in the model, I divided these numbers by 100 and ran 100,000 simulations of the resulting price of the Google stock by October 20, 2011, which is the expiry date for a set of call and put options. The resulting plot looks like the one below:

The change factor is the multiplier you would apply to the current stock price to determine its final value. The plot is also normalized (meaning the area under the graph equals 1, or 100%).
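Here is a rough sketch of how that simulation might look in Python. The number of trading days to expiry (14) is my own assumption for September 30 to October 20, 2011, and the histogram of change factors stands in for the plot above.

```python
# Sketch: simulate Google's change factor by expiry from daily Gaussian moves.
import numpy as np

mean_daily = 0.0373 / 100   # mean daily % change, as a fraction
std_daily = 2.409 / 100     # daily standard deviation, as a fraction
n_days = 14                 # assumed trading days from Sep 30 to Oct 20, 2011
n_sims = 100_000

rng = np.random.default_rng(0)
daily_changes = rng.normal(mean_daily, std_daily, size=(n_sims, n_days))
change_factor = np.prod(1.0 + daily_changes, axis=1)  # multiplier on today's price

# Normalized histogram of the change factor (area under it sums to 1).
density, edges = np.histogram(change_factor, bins=200, density=True)
```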

Now, for illustrative purposes, I took an option at random: the $555 strike call option expiring Oct 20th, 2011, which was priced at about $12.00 when I checked on Google Finance. I plotted its returns as a function of price:

$555 strike, call option Profit/Loss profile @ a price of $12.00
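As a sketch, the per-share profit/loss profile of that long call at expiry can be computed like this, using the ~$12.00 premium quoted above:

```python
# Sketch: profit/loss per share of a long $555 call held to expiry.
import numpy as np

strike = 555.0
premium = 12.00  # approximate price quoted on Google Finance

def long_call_pl(final_price):
    """Per-share P/L of a long call at expiry: payoff minus premium paid."""
    return np.maximum(final_price - strike, 0.0) - premium

prices = np.linspace(450.0, 650.0, 201)
pl_profile = long_call_pl(prices)  # the profile plotted above
```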

So now we have two pieces of information: the probability distribution of Google's stock changing by some factor by the option expiry date, and the profit/loss profile of the option. The most important question to answer is: given the current price of Google's stock at $537 (when I checked today), what is the weighted profit/loss as a function of price? The answer is to take the probability distribution of final prices and multiply it by the return distribution of the option, which gives the following plot:

What this graph tells us is what you expect to earn or lose across all possible outcomes. Summing the values yields the expected value of this trade, or the amount of money you expect to make if you made this bet over and over again. I've done that calculation and the result was -$3.023 per share (and 1 contract covers 100 shares), meaning you expect to lose $302.30 on average when making this trade.
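Here is a self-contained sketch of that expected-value calculation: weight the option's per-share P/L by the simulated distribution of final prices and take the average. The exact figure will not match -$3.023, since the trading-day count and other parameters here are my own assumptions.

```python
# Sketch: expected value of the long $555 call, per share and per contract.
import numpy as np

current_price = 537.0
strike, premium = 555.0, 12.00
mean_daily, std_daily = 0.0373 / 100, 2.409 / 100
n_days, n_sims = 14, 100_000  # assumed horizon and simulation count

rng = np.random.default_rng(0)
change_factor = np.prod(
    1.0 + rng.normal(mean_daily, std_daily, size=(n_sims, n_days)), axis=1)
final_prices = current_price * change_factor

pl_per_share = np.maximum(final_prices - strike, 0.0) - premium
ev_per_share = pl_per_share.mean()          # average over all simulated outcomes
print(ev_per_share, 100 * ev_per_share)     # per share, per 100-share contract
```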

But! There is an upside: you can sell the call contract instead of buying one and expect to make $302.30, turning this into a profit.

Caveats

Obviously, if this were all there was to it, everyone would already be doing this to make money in the stock market (and to some extent, there are people and organizations out there already doing it). So there are 2 important assumptions that you need to be aware of:
  1. that the motion of the price can be modeled by a random Gaussian variable, and
  2. that the mean and standard deviation are accurate and do not change.
But there will be times when Google's stock price won't behave in the random manner we modeled, and we don't even know that the numbers we put into the model were right in the first place! This is the risk we take when using this calculation (and any other model for weighted profits and losses!).

So what is this good for?

This is a problem that plagues any historical analysis used to make predictions about the future. But the most important takeaway is developing an understanding of the likely and unlikely future outcomes to help make better decisions; in the long run, that is what matters most.

Now, even with these points of uncertainty, this can be used to model outcomes based on your own research and how you feel about it. The great power it gives you is being able to quantify your opinions of the future and the risks involved by turning them into an expected value. This helps considerably in decision making, instead of simply going off "a hunch."

There is obviously more work to be done in modeling random variables and outcomes, but I hope you find this a useful tool for thinking about the future movement of a stock. Ideas and comments are most welcome.

Thursday, September 29, 2011

Load Speeds are Important

I bought some hosting space on iPage in February 2011. The service was cheap, space was unlimited, and I guess I got what I paid for when it came to speed: fairly slow load times. I wasn't happy enough with the load times to move the contents of my blog over to the site and domain I've been playing with, but I'll make the appropriate announcement when I finally find a decent hosting solution.

I have been toying with getting onto Amazon AWS, setting up my own server and having my own space. The great thing is that I would have persistent computing power available to run automated applications. I really think there are many fun things one can do with a server without needing to own the hardware.

I am somewhat reluctant to use my main computing box at home as a server, since I want something I could crash with impunity, but I am not really interested in having a second full-blown desktop computer at home. One option I've been contemplating is using a Mac Mini as a server due to its really small footprint; the price is reasonable too, at about $700 for a system with 4GB of memory.

Interestingly, the same system sells for about $100 more when buying through the Japanese Mac Store. I think the exchange rate is playing with the price significantly, as the Yen has skyrocketed (time for some arbitrage??).

Time to do some research, but with some effort it is possible to set up a server on AWS for about $110/year with fairly decent speeds. Storage goes for about $0.10 per GB per month, meaning I could easily have 4-5 GB of website data and still be paying only about $0.50/month for storage. Seems really reasonable to me.

All I would need to test is latency and bandwidth. If things look good, I'll probably make the switch.

Wednesday, September 28, 2011

Distractions are Productivity Killers

My current work setup has me at a large rectangular open desk seating 2xN people, where N is determined by the length of the desk. The office is one large room with M of these tables where everyone works. The problem with this setup is that the office can get really noisy, and I have a hell of a time concentrating when there are that many people around, all conversing about a wide range of topics, with phones going off every 15~20 minutes.

In an environment like that, memory- and concentration-intensive work is nearly impossible. To be honest, some of the best progress I've made with code came while working night shifts when no one was around and I could just zone out and focus on what was going on in the code. While programming, I really need to keep track of variable names, the structure of the program and the algorithms in my head. The instant my concentration is broken, it takes time to reorient myself and figure out what I was doing again. This is actually a classic coding problem, especially when looking at code one wrote days earlier.

I am not a huge fan of large open offices. Some people are able to focus in such environments; I am just not one of them. I'd much rather have headphones on and be dead to the world while getting work done.

I am quite sure that offices around the world are set up in some manner like this, but I think it is rather counterproductive. With laptops being used as primary computing devices, I really don't mind having a main desk somewhere, but I'd be much happier to also have access to a silent stall where I can allocate blocks of 2-3 hours to get serious work done, then return to my desk for less mentally intensive work and be accessible to people. I am sure there has been plenty of research done in this field, and I am curious how much of it is actually implemented in offices around the world.

I would venture to bet that an architect who can create highly productive environments would be paid big bucks!

Tuesday, September 27, 2011

All You Need is the Right Trigger to End Microsoft

I started using Ubuntu Linux about 3 years ago and am now going on my 4th year with the operating system. There were some hiccups at first when switching over, but looking back, I have found the process to be a worthwhile one.

The greatest benefit to me of using a Linux system is the ease of reinstalling software and getting my original settings back after a fresh install. I still remember, when working on a Windows computer, reinstalling the OS once or twice a year to freshen things up and spending 2~3 days getting the system back up again with all the software I use and everything configured to my liking. What a pain that was.

The difference with Linux is the home directory, where all of one's settings are stored in (somewhat) hidden folders: as soon as a program is reinstalled, the home folder is the first place it looks for its settings. So long as you backed up the home directory, the instant you reinstall the software, it is automatically configured. The other great thing is that software can be batch installed from repositories; instead of having to deal with installation dialogues, you just tell the computer what to install and it automatically puts the binaries in place and sets things up. If you have multiple users, each user's home directory holds their personal settings, so there is no clashing whatsoever. Everything is simple to install and fast to restore.

The other great thing is having free software and a repository to download whatever you need from. Just download the software and give it a try; if it works, keep it, and if not, uninstall it and try something else. Most free software out there is "good enough" for all practical purposes.

There have been times when free software didn't work out for me because of minor formatting or compatibility issues. For example, while OpenOffice can open and read MS Office files, the formatting of documents sometimes gets a little off. I had the unfortunate experience of writing and updating resumes in OpenOffice only to find the text alignment just off enough to make the document look a little unprofessional. As a result, I've become a bit of a fan of PDFs: once you convert a document into a PDF, it should display the same across all systems.

If worst comes to worst, with the advent of OS virtualization I can now run Windows inside of Linux, or Linux inside Windows, or what have you. I think this is a great boon for trying out new OSes and for getting access to that one application you can't find on your native OS. Rebooting is such a pain, as I generally don't like closing all my applications and then opening them all again after restarting the computer. Right now I run Windows XP inside Ubuntu just for MS Office on the rare occasions I need to edit documents, but other than that I don't use Windows for much else at home.

To be honest, I don't see any real argument for being dependent on a single operating system anymore. Most of the best software is multi-platform now, and using it is just a matter of installing it on whatever OS you're on. Even writing multi-platform applications is not terribly hard with languages like Java or even Python, since their runtimes take care of running the same code on each platform.

So there really is no need to be totally dependent on a single operating system anymore, other than for the comfort of a familiar interface. To be honest, I think Microsoft is walking on hollow ground; the instant the ground gives, it will be all over for them.

Monday, September 26, 2011

Is Facebook killing blogs?

One thing I've noticed recently is that I am far more spontaneous about posting pictures and updates to Facebook than writing on the blog. Uploading pictures to Facebook is dirt easy: albums are automatically created and linked to my profile, and it is quite easy to make short posts while on the go. With blogging, by comparison, one may have to upload pictures to a separate site, then add comments, then link back to the blog. It's the multi-step work that is a certain killjoy to me.

I've tried blogging while on the go (in places like the train) but found things like looking up multiple webpages, adding links and doing basic editing (like moving a few paragraphs around) to be quite cumbersome compared to writing short status updates.

Which brings me to my point: I believe traditional blogs are dying out in favor of posting things on the fly as smartphones become the norm. The vast majority of my friends have stopped updating their blogs and mainly post news only on Facebook, and that has been the trend for quite some time.

What will happen to blogs in the future, I am not entirely sure. Perhaps one day there will be a big enough backlash against Facebook that people return to the realm of blogging. Who really knows. I, for one, will continue to write, and will actually make a point of spending more time posting to the blog than writing on Facebook. Perhaps the next best thing is stronger integration into (or out of?) Facebook through blogs.

To be honest, there isn't a whole lot of original content on Facebook anyway, and much of my best thinking happens while doing "real" writing.

Sunday, September 25, 2011

Applied Stock Market Stochastics

A stochastic variable is pretty much a non-deterministic variable, in other words a random variable. I've been thinking about the stock market in terms of statistics to better quantify risks and expected values, because investing really boils down to understanding the risk-reward profile and creating a portfolio with the desired characteristics given the available financial vehicles.

The challenge is then quantifying those risks and rewards, which is a black art. I doubt anyone has a perfect system for setting these values a priori, because risk and reward are particularly fuzzy numbers until after the fact.

Given the current fragility of the world markets, one recurring theme that I've been contemplating is the mechanics of hedging. There are of course several ways to avoid losing money ahead of a market downturn, including:
  1. Selling off equities
  2. Buying bonds
  3. Shorting stocks, selling calls or buying puts
  4. Buying short ETFs
The list is not exhaustive, but those are some of the options one has during market uncertainty. The next question becomes: what do you think will happen over the next time frame (which can span days, months or years)? The idea is to track current and future issues, set up a portfolio that reflects your sentiment based on the data collected and your interpretation of it, and make bets on the sentiments you feel most confident about.

Suppose you believe that the stock market is quite volatile but not in a position to make huge gains. If you already hold securities and have made significant gains, but don't want to incur capital gains taxes, selling your securities might not be the best course of action. In this case it might be wise to lock in gains by hedging against future drops.

This raises two questions:
  1. What is the probability of the stock dropping n%?
  2. Given the probability of a drop, how much should you pay for insurance?
Now, obviously, without an answer to part 1, answering part 2 is going to be hard. So I've started by looking at part 1 of the problem: what is the probability of the stock market moving in some direction over some time frame?

The most obvious method that came to mind was to first get the probability distribution of the stock market's monthly percent changes (I could have done daily, but I had already crunched the monthly data). The graph looks like the one below:
The above plot contains all monthly data since 1928. The distribution looks fairly Gaussian, with a mean of 0.764% and a standard deviation of 4.445%.
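As a sketch, those statistics could be computed like this, assuming a hypothetical file dow_monthly_closes.csv containing a single column of monthly closing values, oldest first (0.764% and 4.445% are the figures reported above for the post's own data set):

```python
# Sketch: distribution of monthly % changes from a series of monthly closes.
import numpy as np

closes = np.loadtxt("dow_monthly_closes.csv")       # hypothetical data file
pct_change = 100.0 * (closes[1:] / closes[:-1] - 1.0)

mean_monthly = pct_change.mean()   # the post reports ~0.764% for its data
std_monthly = pct_change.std()     # the post reports ~4.445% for its data

# Normalized histogram standing in for the plot of the distribution.
density, edges = np.histogram(pct_change, bins=50, density=True)
```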

The next step is determining where things may go in the next few months. The easiest way to estimate what may happen over the next 6 months is to draw random Gaussian monthly changes with the mean and standard deviation of the Dow Jones index, one per month, and multiply them together into a single 6-month change factor. The result is the following graph:
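A sketch of that 6-month projection, compounding six Gaussian monthly draws into a single change factor for the index:

```python
# Sketch: distribution of the Dow's 6-month change factor.
import numpy as np

mean_monthly = 0.764 / 100   # historical mean monthly change, as a fraction
std_monthly = 4.445 / 100    # historical monthly standard deviation
n_sims = 100_000

rng = np.random.default_rng(0)
monthly_changes = rng.normal(mean_monthly, std_monthly, size=(n_sims, 6))
six_month_factor = np.prod(1.0 + monthly_changes, axis=1)
```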

Given the distribution for the next 6 months, the next question to answer is: what is the probability of the Dow increasing or decreasing by some factor? This requires some integration, but I've done some sample calculations and those results are provided below:
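In place of integrating analytically, one could estimate those probabilities directly from the simulated change factors, roughly like this:

```python
# Sketch: probability of the Dow moving past a threshold within 6 months,
# estimated from the simulated six_month_factor array above.
import numpy as np

def prob_change_at_least(six_month_factor, pct):
    """P(index changes by at least pct percent over the horizon);
    a negative pct means a drop of at least that size."""
    threshold = 1.0 + pct / 100.0
    if pct >= 0:
        return np.mean(six_month_factor >= threshold)
    return np.mean(six_month_factor <= threshold)

for pct in (-10, -5, 0, 5, 10):
    print(pct, prob_change_at_least(six_month_factor, pct))
```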


Based on the information above, we have a general idea of the probability of seeing growth in the Dow over the next 6 months based on all of the historical data. What would be more useful is to find means and standard deviations that represent the market right now, which could be done by calculating the distributions of % changes in price over a more recent time frame, or, better yet, values that one might expect over some future time frame based on current research.

The idea is to build a portfolio that reflects one's personal views on individual stocks and the stock market in general: adequately hedged against catastrophic crashes while still exposed to enough upside, mitigating risk while aiming for the best possible profits. Modeling the motion of the stock market is an important tool for creating a portfolio that matches one's risk-reward profile.

This is still a work in progress with many imperfections, but it is a start toward creating portfolios with explicit characteristics.