My latest article for Seeking Alpha was just posted, titled eMagin Is Still Just Potential. It covers a company I have followed for a long time: eMagin. They make high-quality micro-displays and have been trying to crack the virtual reality market for years. I have been waiting and hoping for them to find success, but so far it has been elusive. Recent developments have knocked the stock price down below $1, at a time when it would appear the company is starting to perform. It's an interesting story (I think..).
It's time for my third annual 'Best Reads' of the year post. Again this year, this is not only my best reads, it's all my reads. So your first thought is likely: really? Five books in a year? That surprised me too.. but I don't really carve out specific time to read, and I spend a lot of my free 'relaxing' time either writing or programming. Plus, the Peter the Great book was a big book with very few pictures.. Anyway, without further ado, here are my quick opinions of the books I read this year, in preference order:
1. Peter the Great: His Life and World
My favorite book I read this year. I knew nothing of Peter the Great and very little about European history circa 1700, and only decided to read this book because I enjoyed another book by Robert Massie (Dreadnought). What I most appreciate about Massie's writing is that he summarizes the surrounding political landscape and the important people throughout the book, giving a great feel for the layout of Northern Europe during Peter's time. He also has a way of bringing you close to the subject of the book, a real perspective on the quirkiness and intellect of Peter the Great. If you have enjoyed any of Massie's other books, I would definitely recommend this one. And if you enjoy history and have not read a book by Robert Massie, you owe it to yourself to try one.
2. The Good Rain: Across Time & Terrain in the Pacific Northwest
I enjoyed this; it was like a collection of love letters to the Pacific Northwest. The author obviously has a fondness for the region and has collected several short stories around various cultural aspects. This was written in the early 1990s, so in some sense it is history, though since I lived through this period and was working in Seattle at the time, it brought back some interesting memories (Lesser Seattle, anybody?). He did a great job bringing to life some semi-obscure Pacific Northwest personalities of the time, stirring up a feeling of nostalgia for Seattle before the big tech boom.
3. When Paris Went Dark: The City of Light Under German Occupation
I read this prior to a trip to Paris I was planning this year. Having already visited Paris a few times (it is one of my favorite cities), I am pretty familiar with its layout and neighborhoods, and the book added an interesting time dimension, letting me see those areas in a very different era. It makes you think about what it would be like to live under the oppression of a victor, something I hope to never experience. There is also an interesting recap of what happened after the war, and the division between the pro- and anti-Nazi sympathizers after the Nazis left. Definitely worth a read if you are interested in WWII history and are either planning to visit Paris or have recently visited.
4. Hillbilly Elegy: A Memoir of a Family and Culture in Crisis
I read this book hoping to get a good feel for the mood, problems, and culture of middle America, which I assume to be pretty distant from the relatively liberal Pacific NW. It was an interesting and easy read, but I don't feel it gave me the insight I was looking for. It felt more like a limited picture of one guy's hard-luck story, a guy who made it out of a tough childhood situation. I think my goal would have been better accomplished by a book that interviewed many different people with different perspectives. But an interesting story nonetheless.
5. The Wizard of Menlo Park
This is a biography of Thomas Edison covering his life and works, and I am not sure why this book didn't resonate with me. It was an interesting period and an interesting guy, but I just didn't get a good feel for the person or the period. I felt the author was somewhat negative on Edison, which surprised me a little, but I don't think that dampened my interest. The story just seemed to be told from 30 feet away from the subject, so it wasn't as immersive as other biographies I have read.
For 2019: probably another book or two on European history, and probably some more American history subjects. But when you only read five books a year, the challenge is to read history faster than time creates it.
After 9 years, I have decided to replace the old Puget Investor / invest.vfsystems.net VFS Rating. The VFS Rating uses various fundamental stock metrics to try to determine whether a stock is under- or overvalued. It has never been as accurate as I had hoped, for a variety of reasons.
As of 9/30/18, here are the final results of the VFS Rating:
Note that over the life of the VFS Rating, stocks rated 10 would on average outperform stocks rated 1 by 0.09% a month (the .01x term above represents the monthly premium per rating point, so on average a VFS 10 rating would equate to a 0.10% monthly return, where a 1-rated stock would return 0.01%). There is high variability in these numbers, so I think this might be overstating the accuracy of the rating, but at least it is a positive correlation.
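For anyone curious how a fit like that gets produced, here is a minimal sketch (the ratings and monthly returns below are made-up sample values for illustration, not my actual portfolio data):

```python
import numpy as np

# Hypothetical sample: VFS rating vs. the stock's next-month return in %.
ratings = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
returns = np.array([0.00, 0.03, 0.02, 0.05, 0.04, 0.07, 0.06, 0.08, 0.10, 0.09])

# Fit a line: expected monthly return = slope * rating + intercept.
slope, intercept = np.polyfit(ratings, returns, 1)
print(f"monthly premium per rating point: {slope:.3f}%")
print(f"10-rated vs. 1-rated spread: {slope * (10 - 1):.2f}% per month")
```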
Note that on average the stocks in the portfolio outperformed the market by 0.12% a month, which annualized is over 1% a year above the S&P 500, which I like. However, that number cannot be credited to the VFS Rating; it is more a function of the stocks I chose to add to the portfolio. I would like to think the 1%+ number is due to my brilliance in picking stocks, but I wonder what it would look like had I not included Amazon in my portfolio during that time.
I am now taking what I have learned from the calculation of the VFS Rating, and using some more modern programming techniques to improve it. Instead of a VFS Rating, I hope to come up with a target valuation of a stock, from which I can discern the discount or premium the stock market has applied to the stock based on fundamental indicators. This is very tough, because the valuation of a stock (in my current opinion) is primarily based on the growth prospects over the next 10 years – which is a very subjective (and inaccurate) forecast. But I have some ideas, and if I set up my measurements right I should be able to see if it can be done successfully.
In the meantime, I plan to rely on my technical indicator on the invest.vfsystems.net site. This indicator has shown some promising early results. Since November 2017, the performance of this indicator can be described as Y = .077x + .294. This formula is a little convoluted, but my technical score goes from -10 to 10, so a stock with a score of 10 outperforms a stock with a score of -10 by approximately 1.5% a month. Again, this is a small sample set (13 months), but it is promising.
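To make the arithmetic concrete, here is what that fitted line implies (the function name is mine, for illustration):

```python
# Y = .077x + .294: x is the technical score (-10 to 10),
# Y is the expected monthly return in %.
def expected_monthly_return(score: float) -> float:
    return 0.077 * score + 0.294

spread = expected_monthly_return(10) - expected_monthly_return(-10)
print(f"score 10 vs. score -10: {spread:.2f}% per month")  # ~1.54%
```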
Finally, to repeat what I have told my friends: if you do plan to invest in individual stocks, the most important thing is to get your measurement systems set up. There are so many wrong strategies out there that it's important to measure your strategy and make sure you are beating whatever index you would invest in as an alternative. I have a number of measurements set up, so hopefully, as I embark on this new formula, they will show positive results. If not, it could be back to the drawing board once again.
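The simplest version of such a measurement system is just tracking your strategy's monthly returns next to the index you would otherwise hold, something like this sketch (the return series are placeholders):

```python
# Monthly returns for your strategy and the alternative index (placeholders).
strategy = [0.012, -0.004, 0.021, 0.008]
benchmark = [0.010, -0.007, 0.015, 0.011]  # e.g., an S&P 500 index fund

# Average monthly excess return: positive means the strategy is earning its keep.
excess = [s - b for s, b in zip(strategy, benchmark)]
print(f"avg monthly excess return: {sum(excess) / len(excess):+.3%}")
```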
One of the biggest question marks in investing over the last 10 years has been what happens to retail in the online age. I have been following Nordstrom off and on over the last 20 years, as it is a high-quality retailer with dedicated family management. However, in the last few years I have been out of Nordstrom, avoiding retail altogether until the future of retail becomes clearer to me.
One of the statistics that most impressed me about Nordstrom was that it was growing its web presence at something like 20% a year for the last few years. It seemed to be one of the few retailers with a good online strategy, keeping itself Amazon-proof.
However, that blessing may be a curse in disguise. Per this article on Business Insider, the expanding online presence is causing a problem, with digital sales now accounting for 30% of Nordstrom's sales.
So they may be successfully fighting off Amazon (for now), but it is clear they are cannibalizing their in-store sales. Online sales grew 20%, while overall sales grew around 3%.
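A quick back-of-envelope calculation with those figures shows what that implies for the stores themselves:

```python
# If digital is 30% of sales after growing 20%, while total sales grew 3%,
# the in-store side must have shrunk. Index last year's total sales to 1.00.
total_after = 1.03
online_after = 0.30 * total_after
online_before = online_after / 1.20
store_before = 1.00 - online_before
store_after = total_after - online_after
print(f"implied in-store change: {store_after / store_before - 1:+.1%}")  # about -2.9%
```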
Is this bad? In Nordstrom's case, I am now changing my mind, and I think this is a sign of trouble. In my opinion, Nordstrom has differentiated itself with premium stores and excellent customer service. Translating that differentiation to the web is a whole new challenge, beyond the oft-mentioned ease of returning things to Nordstrom. Competing on a premium experience on the web is going to be difficult, especially with Amazon and other competitors (e.g., upstart Stitchfix) building a model around personalization.
My prediction is that Nordstrom will have to reduce its average store size, as it probably no longer makes sense to carry the same level of in-store inventory when people are increasingly buying online. If so, I think the cost of shrinking and retrofitting stores will exceed the (presumably) reduced rent cost for the next few years, all while spending to fight off online competitors.
So while I think Nordstrom is still a great brand and a premium shopping experience, I am staying away. I think the moves the company needs to make may further cloud how brick-and-mortar retailing needs to change in order to survive in an online world.
I have been spending a lot of time lately analyzing regressions and related factors. As my stock market model creeps toward the realm of artificial intelligence, making decisions based on related datasets, the biggest question for me as a programmer / data scientist is: how do I know whether a correlation between datasets is predictive versus just a coincidence?
So if you are reading this post to try to find the answer to that question, you can stop reading now, because I have nothing for you. Right now I am just using my gut (Feeding a computer datasets is probably a lot like parenting – only tell your children the things they need to know, don’t confuse them with too much data as they build their moral compass).
Anyway, as I think about all these issues, every time I think I have found a meaningful correlation, I think back to the Super Bowl Indicator. Are you looking for a sure-fire predictor of stock market performance as measured by the S&P 500, with a track record of being correct 80% of the time over the last 50 years?
Then maybe your strategy should be: if an AFC team wins the Super Bowl, go all in; if the NFC wins, take all your savings out of the market. Of course I am not serious, but it is curious that this is one of the best predictors out there. Since the year 2000, it has a 66% accuracy.
I bring this up because it is a great example of the problem faced by software developers and data scientists like myself. As artificial intelligence drives more and more software in our world, we will expose new flaws in the software development process. The programmer's dilemma will be deciding what data to expose to the computer to help it make decisions. Intelligence (artificial or human) is built up from experience and/or the data available. If I had built my stock model off the Super Bowl Indicator, rather than looking at other technical or fundamental factors, I would have easily outperformed myself and just about every other financial advisor out there. Maybe I should spend more time trying to justify that it is a meaningful correlation, rather than just a coincidence. As of this writing, I still can't do it, and I am ignoring it.
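If I ever do try to justify it, the first sanity check would look something like the test below: how surprising is an 80% hit rate over 50 years under pure chance? (A sketch assuming a simple 50/50 null; the trap, of course, is that screening thousands of candidate indicators guarantees some will look this good by luck alone.)

```python
from scipy.stats import binomtest

# 40 correct calls out of 50 years (80%), under a coin-flip null hypothesis.
result = binomtest(k=40, n=50, p=0.5, alternative='greater')
print(f"p-value: {result.pvalue:.2e}")  # tiny -- yet the indicator is surely a coincidence

# A tiny p-value here doesn't prove causation: test enough arbitrary
# indicators and a few will always clear this bar by chance.
```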
So don't consider this post investment advice, just a thought about how machines will be making decisions in the future, and about the flaws that will be programmed into future software based on programmers' decisions about what data is meaningful. There will be some mistakes made.
And if you see an investment adviser who touts an 80% success rate over the last 40 years, ask him or her if they know who won the Super Bowl last year.
After a longer-than-intended layoff, I got around to getting a new article published on Seeking Alpha. It is an update on the status of Rocky Brands, a maker of work and leisure boots that I have followed for several years. It's kind of an interesting story: a company that has been stagnant for a while and finally appears to be picking up. For the full story, click on the link below.
A while ago I wrote a post regarding whether or not RSI looked like a meaningful indicator. After writing that, I decided to run an experiment to see if I could come up with a strategy based on the relative strength index (RSI) of a stock.
First off, there are many ways to calculate RSI (short-term, mid-term, long-term), but I decided to look at the short-term 10-day moving average RSI, for no particular reason other than I had that data available. The strategy: when a stock goes above 70 on the RSI, sell a call or buy a put, on the assumption that it is likely to perform poorly going forward; when the RSI falls below 30, buy a call, assuming it will outperform the market going forward.
I set up a process where each week my stock model sends me an email with candidate stocks whose RSI is above 70 or below 30, and I pick likely candidates from that list and record them in a spreadsheet. An important point: anytime I try out a new strategy, I do it via 'paper trades' first; instead of actually making the trade, I just write it down or put it in a spreadsheet. I figure if I stumble across a winning strategy, it will work for years to come, so I can afford to wait a while before I actually put it in place. Since I tend to stumble across more losing strategies than winning strategies, this practice has served me well over the years.
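For anyone curious what that weekly screen might look like in code, here is a minimal sketch (a simple moving-average RSI variant; the function names are mine, and my model's exact calculation may differ):

```python
import pandas as pd

def rsi(prices: pd.Series, period: int = 10) -> pd.Series:
    """10-day moving-average RSI (one of several ways to compute RSI)."""
    delta = prices.diff()
    gains = delta.clip(lower=0).rolling(period).mean()
    losses = (-delta.clip(upper=0)).rolling(period).mean()
    return 100 - 100 / (1 + gains / losses)

def weekly_screen(closes: pd.DataFrame) -> dict:
    """closes: daily closing prices, one column per ticker."""
    latest = closes.apply(rsi).iloc[-1]
    return {
        "above_70": latest[latest > 70].index.tolist(),  # sell a call / buy a put
        "below_30": latest[latest < 30].index.tolist(),  # buy a call
    }
```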
The results? See below:
In a word.. ouch! Had I actually pursued this strategy, it would have been brutal. Note that for calls I sold (those were hypothetical covered calls), my profit calculations include the opportunity cost. For instance, for the June 23rd sale of Costco calls I would have collected $1000+, but at expiration the stock was $884 over the strike price plus the call price, so I missed out on that (assuming I had the stock to begin with).
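For reference, the opportunity-cost accounting works like this sketch (the function name is mine, and the premium is rounded to exactly $1000 for illustration):

```python
def covered_call_vs_hold(premium: float, final_over_strike: float) -> float:
    """P&L of selling the call, relative to simply holding the stock.
    final_over_strike: how far above the strike the stock finished
    (zero or negative if the call expired worthless)."""
    return premium - max(0.0, final_over_strike)

# The Costco case above: ~$1000 collected, stock finishing $1884 over the
# strike ($884 over strike + call price) -> net about -$884 vs. just holding.
print(covered_call_vs_hold(premium=1000.0, final_over_strike=1884.0))  # -884.0
```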
For now, I have set this strategy aside and am going to think about what I learned for a few months. Maybe I stumbled upon a successful strategy by doing the opposite of what I originally hypothesized: perhaps anytime a stock goes above 70 on the 10-day RSI you should buy, and when it falls below 30 sell a call on it. Or maybe the time premium of options just makes it too hard to be successful. I don't know. What this did reinforce for sure is the value of practicing via paper trades before committing to a strategy.
In late September I took a trip to Las Vegas for a friend's birthday, and because I am too good at math to enjoy gambling in the casinos, I had to figure out a better plan. So I decided to take some of the code I have created and try to apply it to betting on football. In early September I spent a few hours putting together two models, one for NCAA football and one for the pros.
My NCAA model looks at the top 25 ranked teams, and applies various factors to the team and their opponent, then spits out a prediction on whether or not the spread is too high. I placed some mock trades the first two weeks of the season, and had a success rate of 60% or higher each week.
My pro football model was a little different. I do a confidence pool each week with a bunch of friends, where we assign a weight to each team we think will win, giving our most confident pick a 16 and our least confident pick a 1. I had eight years of data from this pool and ran a few regressions, but didn't find any strong correlations for picking winners. I finally settled on comparing the pool's consensus picks each week to the point spread, and betting on the game with the highest divergence. Using this method, the first couple of weeks performed pretty well, in the 60-70% accuracy range (against the point spread). To be clear, nobody in this pool is an expert (definitely including myself), so it is an interesting sample of football fan sentiment.
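Mechanically, the divergence pick works something like this sketch (the games, confidence numbers, and the confidence-to-points scaling are all made-up assumptions, not my actual pool data):

```python
# (game, avg pool confidence in the favorite 1-16, Vegas spread on the favorite)
games = [
    ("SEA @ LAR", 14.2, -3.0),
    ("GB @ CHI",   6.5, -7.5),
    ("NE @ MIA",  11.0, -9.5),
]

def divergence(confidence: float, spread: float, scale: float = 0.75) -> float:
    # Crudely translate pool confidence into an implied point margin,
    # then measure how far it sits from the margin the spread implies.
    implied_margin = confidence * scale
    return implied_margin - (-spread)

# Bet the game where pool sentiment diverges most from the spread.
best = max(games, key=lambda g: abs(divergence(g[1], g[2])))
print("biggest divergence:", best[0])
```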
Off I went to Vegas, armed with a spreadsheet of recommended picks. The results:
- For the top 8 NCAA games that I thought were most incorrectly priced, I won 5 out of 8 (62.5%). Not great, but still positive after factoring in the 10% casino take. Even better, of the 5 games I actually bet on using my spreadsheet, 4 were winners (80%). For one of my games I arrived too late to bet, and two others I laid off because I wasn't at all familiar with the teams. I also made two 'hunch bets' on the under of two games, winning 1 and losing 1. So my spreadsheet did outperform my hunches.
- For the pro games, my model picked the 5 games it considered most mispriced relative to the spread. Of those games, it was correct on 4 out of 5 (80%). The good news for me was that I only made 4 of those bets and won 4 out of 4 (100%); I laid off the one loser only because my own confidence-pool pick went against my spreadsheet, so I didn't trust the spreadsheet. The bad news is I made another 'hunch' bet on a game, and it was wrong, so I still ended up 4 out of 5 (80%).
What did I learn from this experience? I am not sure. I only spent a few hours of spreadsheet work coming up with these formulas, so I find it hard to believe I found the magic formula to sports betting riches. And a sample set of 3 weeks is too small to make any firm determination. However, I do think it's quite possible that the sports betting market is much more inefficient than the stock market, so with more analytics it may be possible to come up with a consistently winning strategy. I would guess most sports bets are made on emotion and hunches. During my research I found a whole lot of data that could be used to build algorithms that find patterns and identify games where the spread is distorted by emotion.
So for now, I am going to toss the task of a sports betting model on the pile of software projects in my personal backlog. But you never know, it could be a whole new career ahead of me.
With my disposal of Starbucks, I decided I wanted another consumer discretionary stock with solid management. With the recent weakness in ATVI due to lowered guidance, I figured maybe now is the time to get in.
Interestingly (and incorrectly, I believe), ATVI correlates most strongly with the technology sector:
Yes it is a software company, but it seems more tied to discretionary spending than technology. So I think the market is wrong about that one. And since the discretionary spending sector is pretty hot right now, I am hoping the market figures that out.
A final point is that the CEO of ATVI is Bobby Kotick, who has been at this a long time and, I believe, has a vision of where he wants to take the company. I have historically done well investing in companies with standout CEOs, and I think I have to put Kotick in this category. Kotick is more of a businessman than a gamer, which I think is what you want from an investor's point of view (though perhaps the opposite from a gamer's point of view). There are lots of opportunities for gaming in the coming years, from virtual reality to e-sports, and I am placing my bet that ATVI and Kotick will be the winner.
I have once again been busy in the stock market model laboratory, looking to optimize things and get a better handle on market trends. Over the past couple of years my model has been one-dimensional, basing predictions on a momentum / buy-the-dips strategy incorporating yield curve information. This model has worked out OK for asset classes and individual stocks, but has not been meaningful for stock market sectors.
So the big improvement I just incorporated was to integrate multiple datasets into my model. What this means is I now look at historical technical data across multiple datasets, triangulating historical trends to come up with a prediction.
Let's walk through an example. Rocky Brands (RCKY) is a stock I own, and it has some great correlations across 3 datasets (a sketch of the rolling comparison follows the list):
- RCKY's movement appears to correlate inversely with the 10-month moving average price of oil. If RCKY moves up more slowly than the price of oil, its odds of outperforming the following month are increased. I am OK with this seemingly odd correlation, because Rocky Brands is a maker of work boots used by oil field workers. This influence is often called out on the earnings call, so it's likely a valid correlation.
- RCKY's movement correlates with the inverse movement of the Vanguard Consumer Discretionary Sector index on a 4-month rolling average basis. A slight correlation, but if RCKY performs worse than the index on a 4-month rolling average basis, the next month shows a slight outperformance.
- RCKY's movement correlates with the inverse movement of the US Microcap Stock Index on a 4-month rolling average basis. Again, a slight correlation, but RCKY is a microcap stock, so it does tend to catch up after underperforming over a 4-month period.
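Here is a minimal sketch of how one of those 4-month rolling comparisons can be computed (the prices and column names are placeholders):

```python
import pandas as pd

# Month-end closes for RCKY and the comparison index (made-up values).
monthly = pd.DataFrame({
    "RCKY":  [25.0, 24.1, 23.8, 24.5, 23.9, 24.2],
    "INDEX": [100.0, 101.2, 103.0, 104.1, 105.5, 106.0],
})

# Trailing 4-month return of each series; a negative difference means
# RCKY lagged the index, which historically preceded a catch-up month.
trailing = monthly.pct_change(periods=4)
signal = trailing["RCKY"] - trailing["INDEX"]
print(signal.dropna())
```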
Are these meaningful regressions? That's a valid question. But if you add all these together, here is a summary regression for RCKY:
I think a 35% overall regression is meaningful; at least it should be better than guessing. Note that even though two of the three regressions show a current negative score, the overall score is still positive (2.45), because the index with the strongest correlation was positive.
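Mechanically, the combination works like the sketch below (the weights and current readings are made-up numbers, chosen only to show how two negative signals plus one strongly weighted positive signal can still net out positive):

```python
# (signal, current score, weight reflecting its historical correlation strength)
signals = [
    ("oil 10-mo inverse",            6.0, 0.7),  # strongest correlation, positive now
    ("consumer disc. 4-mo inverse", -2.0, 0.4),
    ("microcap 4-mo inverse",       -2.5, 0.4),
]

overall = sum(score * weight for _, score, weight in signals)
print(f"overall score: {overall:.2f}")  # 4.2 - 0.8 - 1.0 = +2.4, still positive
```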
There are still some flaws in this model, and I am still finding minor bugs on a weekly basis. But I am constantly making improvements, and I have a long roadmap of scheduled enhancements. I will continue to post updates on ideas I have and changes I implement. Any thoughts on my approach or questions are welcome.