My latest article for Seeking Alpha was just posted, titled eMagin Is Still Just Potential. It covers a company I have followed for a long time: eMagin. They make high-quality microdisplays and have been trying to crack the virtual reality market for years. I have been waiting and hoping for them to find success, but so far it has been elusive. Recent developments have knocked the stock price below $1, at a time when the company finally appears to be starting to perform. It's an interesting story (I think).
After 9 years I have decided to replace the old Puget Investor / Invest.vfsystems.net VFS Rating. The VFS Rating uses various fundamental stock metrics to try to determine whether a stock is undervalued or overvalued. It has never been as accurate as I had hoped, for a variety of reasons.
As of 9/30/18, here are the final results of the VFS Rating:
Note that over the life of the VFS Rating, stocks rated 10 outperformed stocks rated 1 by an average of .09% a month (the .01x above represents the monthly premium per rating point, so on average a VFS rating of 10 equates to a .10% monthly return, while a 1-rated stock returns .01%). There is high variability in these numbers, so this may overstate the accuracy of the rating, but at least the correlation is positive.
Note that on average the stocks in the portfolio outperformed the market by .12% a month, which annualized is over 1% a year above the S&P 500, which I like. However, that number cannot be credited to the VFS Rating; it is more a function of the stocks I chose to add to the portfolio. I would like to think the 1%+ figure is due to my brilliance in picking stocks, but I wonder what it would look like had I not included Amazon in my portfolio during that time.
I am now taking what I have learned from calculating the VFS Rating and using some more modern programming techniques to improve it. Instead of a VFS Rating, I hope to come up with a target valuation for a stock, from which I can discern the discount or premium the market has applied to it based on fundamental indicators. This is very tough, because the valuation of a stock (in my current opinion) is primarily based on its growth prospects over the next 10 years, which makes for a very subjective (and inaccurate) forecast. But I have some ideas, and if I set up my measurements right I should be able to see whether it can be done successfully.
In the meantime, I plan to rely on the technical indicator on the invest.vfsystems.net site. This indicator has shown some promising early results. Since November 2017, its performance can be described as y = .077x + .294. This formula is a little convoluted, but my technical score ranges from -10 to 10, so a stock with a score of 10 outperforms a stock with a score of -10 by approximately 1.5% a month. Again, this is a small sample set (13 months), but it is promising.
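To unpack that a bit, here is the arithmetic behind the 1.5% figure; the helper function below is just shorthand for evaluating the fitted line, not part of the actual model:

```python
# y = 0.077x + 0.294, where x is the technical score (-10 to 10) and y is the
# following month's return premium in percent.
def expected_monthly_premium(score):
    return 0.077 * score + 0.294

top = expected_monthly_premium(10)      # about +1.06% per month
bottom = expected_monthly_premium(-10)  # about -0.48% per month
print(top, bottom, top - bottom)        # a spread of roughly 1.5% per month
```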
Finally, to repeat what I have told my friends: if you do plan to invest in individual stocks, the most important thing is to get your measurement systems set up. There are so many wrong strategies out there that it's important to measure your strategy and make sure you are beating whatever index you would invest in as an alternative. I have a number of measurements set up, so hopefully as I embark on this new formula, it will show positive results. If not, it could be back to the drawing board once again.
One of the biggest question marks in investing over the last 10 years has been what happens to retail in the online age. I have followed Nordstrom off and on over the last 20 years, as it is a high-quality retailer with dedicated family management. However, in the last few years I have been out of Nordstrom, since I have been avoiding retail altogether until the future of the industry becomes clearer to me.
One of the statistics that most impressed me about Nordstrom was that it has been growing its web presence at something like 20% a year for the last few years. It seemed to be one of the few retailers with a good online strategy, keeping itself Amazon-proof.
However, that may be a curse in disguise. Per this article on Business Insider, the expanding online presence is causing a problem, with digital sales now accounting for 30% of Nordstrom's sales.
So they may be successfully fighting off Amazon (for now), but it is clear they are cannibalizing their in-store sales: online sales grew 20%, while overall sales grew only around 3%, which implies in-store sales actually shrank.
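Here is the rough back-of-the-envelope math behind that statement, using the approximate figures cited above (treat every number as approximate):

```python
# Rough, illustrative math: digital is ~30% of this year's sales,
# online grew ~20%, and total sales grew ~3%.
total_now = 1.03                    # total sales, indexed to 1.00 last year
online_now = 0.30 * total_now       # digital share of this year's sales
online_last_year = online_now / 1.20
store_last_year = 1.00 - online_last_year
store_now = total_now - online_now

store_growth = store_now / store_last_year - 1
print(f"implied in-store growth: {store_growth:.1%}")  # roughly -3%
```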
Is this bad? In Nordstrom's case, I am now changing my mind, and I think it is a sign of trouble. In my opinion, Nordstrom has differentiated itself with premium stores and excellent customer service. Translating that differentiation to the web is a whole new challenge, beyond the oft-mentioned ease of returning things to Nordstrom. Competing for a premium experience on the web is going to be difficult, especially with Amazon and other competitors (e.g., upstart Stitch Fix) building a model around personalization.
My prediction is that Nordstrom will have to reduce its average store size, as it probably no longer makes sense to carry the same level of in-store inventory when people are increasingly buying online. If that is the case, I think the cost of shrinking and retrofitting stores will exceed the (presumably) reduced rent for the next few years, all while the company keeps spending to fight off online competitors.
So while I think Nordstrom is still a great brand and a premium shopping experience, I am staying away. I think the moves the company needs to make in the future may further cloud how brick-and-mortar retailing needs to change in order to survive in an online world.
I have been spending a lot of time lately analyzing regression and related factors. As my stock market model creeps toward the realm of artificial intelligence, making decisions based on related datasets, the biggest question for me as a programmer / data scientist is: how do I know whether a correlation in the data is predictive or just a coincidence?
So if you are reading this post to find the answer to that question, you can stop reading now, because I have nothing for you. Right now I am just using my gut (feeding a computer datasets is probably a lot like parenting: only tell your children the things they need to know, and don't confuse them with too much data while they build their moral compass).
Anyway, as I think about all these issues, every time I think I have found a meaningful correlation, I think back to the Super Bowl Indicator. Are you looking for a sure-fire predictor of stock market performance as measured by the S&P 500, with a track record of being correct 80% of the time over the last 50 years?
Then maybe your strategy should be: if an NFC team wins the Super Bowl, go all in; if the AFC wins, take all your savings out of the market. Of course I am not serious, but it is curious that this is one of the best predictors out there. Since the year 2000, its accuracy has been about 66%.
I bring this up because it is a great example of the problem faced by software developers and data scientists such as myself. As artificial intelligence drives more and more software in our world, we will expose new flaws in the software development process. The programmer's dilemma will be deciding what data to expose to the computer to help it make decisions. Intelligence (artificial or human) is built up from experience and/or the data available. If I had just built my stock model off the Super Bowl Indicator, rather than looking at other technical or fundamental factors, I would have easily outperformed myself and just about every other financial advisor out there. Maybe I should spend more time trying to justify that it is a meaningful correlation rather than just a coincidence. As of this writing, I still can't do it, and I am ignoring it.
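One simple sanity check, which is not part of my model but illustrates the kind of test I have in mind, is to ask how likely such a record would be by pure chance:

```python
from scipy.stats import binom

# How surprising is "correct 40 out of 50 years" if the indicator were pure
# coincidence? Under a naive 50/50 null, doing that well is very unlikely:
print(binom.sf(39, 50, 0.5))   # P(X >= 40) under a coin-flip null

# But the market rises in most years, so a fairer null is "always predict up,"
# which has historically been right roughly 70% of the time. Against that
# base rate the record looks much less special:
print(binom.sf(39, 50, 0.7))   # P(X >= 40) under a ~70% base-rate null
```

Even so, a low p-value would not tell me why football results should drive stock returns, which is the part I still can't justify.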
So don't consider this post investment advice; it's just a thought about how machines will be making decisions in the future, and the flaws that will be programmed into future software based on programmers' decisions about what data is meaningful. There will be some mistakes made.
And if you see an investment adviser who touts an 80% success rate over the last 40 years, ask them if they know who won the Super Bowl last year.
In late September I took a trip to Las Vegas for a friend's birthday, and because I am too good at math to enjoy gambling in the casinos, I had to figure out a better plan. So I decided to take some of the code I have created and try to apply it to betting on football. In early September I spent a few hours putting together two models, one for NCAA football and one for the pros.
My NCAA model looks at the top 25 ranked teams, applies various factors to each team and its opponent, and then spits out a prediction on whether the spread is too high. I placed some mock bets the first two weeks of the season and had a success rate of 60% or higher each week.
My pro football model was a little different. I do a confidence pool each week with a bunch of friends, where we assign a weight to each team we think will win, giving our most confident pick a 16 and our least confident pick a 1. I had eight years of data from this pool and ran a few regressions, but didn't find any strong correlations for picking winners. I finally settled on comparing the pool's consensus picks each week to the point spread and betting on the game with the highest divergence (a rough sketch of the idea is below). Using this method, the first couple of weeks performed pretty well, in the 60-70% accuracy range against the point spread. To be clear, nobody in this pool is an expert (definitely including myself), so it is an interesting sample of football fan sentiment.
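Here is a minimal sketch of the divergence idea. The data, field names, and the confidence-to-points scaling are all made up for illustration; this is not my actual spreadsheet logic:

```python
# Rank games by how much the pool's consensus disagrees with the Vegas spread.
games = [
    # (pool_pick, opponent, avg_confidence_on_pick, spread_on_pick)
    # spread_on_pick = points by which Vegas favors the pool's pick
    # (negative means Vegas actually favors the other team).
    ("SEA", "ARI", 14.2, 7.0),
    ("KC",  "DEN", 9.0, -3.5),
    ("DAL", "NYG", 3.1, 2.5),
]

POINTS_PER_CONFIDENCE = 0.5  # assumed mapping from confidence points to margin

def divergence(game):
    pick, opponent, confidence, spread = game
    implied_margin = confidence * POINTS_PER_CONFIDENCE
    # How much more the pool likes this pick than Vegas does
    return implied_margin - spread

# Bet the game(s) where the pool and Vegas disagree the most.
for game in sorted(games, key=divergence, reverse=True):
    print(f"{game[0]} over {game[1]}: divergence {divergence(game):+.1f}")
```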
Off I went to Vegas, armed with a spreadsheet of recommended picks. The results:
- For the top 8 NCAA games that I thought were most incorrectly priced, I won 5 out of 8 (62.5%). Not great, but still positive after factoring in the 10% casino take. Even better, of the 5 games I actually bet on using my spreadsheet, 4 were winners (80%). I arrived too late to bet one of the games, and I laid off two others because I wasn't at all familiar with the teams. I also made two 'hunch bets' on the under in two games, winning 1 and losing 1. So my spreadsheet did outperform my hunches.
- For the pro games, my model picked the 5 games it considered most mispriced by the spread. Of those games, it was correct on 4 out of 5 (80%). The good news for me was that I only made 4 bets and won all 4 (100%). I laid off the one loser only because my pick in our confidence pool went against my spreadsheet, so I didn't trust the spreadsheet. The bad news is that I made another 'hunch' bet on a game, and it was wrong, so I still ended up 4 out of 5 (80%).
What did I learn from this experience? I am not sure. I only spent a few hours of spreadsheet work coming up with these formulas, so I find it hard to believe that after so little effort I found the magic formula for sports betting riches. And a sample of 3 weeks is too small to make any firm determination. However, I do think it's quite possible that the sports betting market is much more inefficient than the stock market, so with more analytics it may be possible to come up with a consistently winning strategy. I would guess most sports bets are made on emotion and hunches. During my research I found a whole lot of data that could be used to build algorithms to find patterns and identify games where the spread is distorted by emotion.
So for now, I am going to toss the task of building a sports betting model onto the pile of software projects in my personal backlog. But you never know, there could be a whole new career ahead of me.
I have once again been busy in the stock market model laboratory, looking to optimize things and get a better handle on market trends. Over the past couple of years my model has been one-dimensional, basing predictions on a momentum / buy-the-dips strategy incorporating yield curve information. This model has worked out OK for asset classes and individual stocks, but it has not been meaningful for stock market sectors.
So the big improvement I just incorporated was integrating multiple datasets into the model. What this means is that I now look at historical technical data across several related series, triangulating their trends to come up with a prediction.
Let's walk through an example. Rocky Brands (RCKY) is a stock I own, and it shows some great correlations across three datasets:
- RCKY's movement appears to correlate inversely with the 10-month moving average price of oil. If RCKY moves up more slowly than the price of oil, its odds of outperforming the following month are increased. I am OK with this seemingly odd correlation, because Rocky Brands makes work boots used by oilfield workers. This influence is often called out on the earnings call, so it is likely a valid correlation.
- RCKY's movement correlates with the inverse movement of the Vanguard Consumer Discretionary index on a 4-month rolling-average basis. A slight correlation, but if RCKY performs worse than the index on a 4-month rolling-average basis, the next month shows a slight outperformance.
- RCKY's movement correlates with the inverse movement of the US Microcap Stock Index on a 4-month rolling-average basis. Again, a slight correlation, but RCKY is a microcap stock, so it does tend to catch up after underperforming for a 4-month period.
Are these meaningful regressions? That's a valid question. But if you add all of these together, here is a summary regression for RCKY:
I think a 35% overall regression is meaningful; at least it should be better than guessing. Note that even though two of the three regressions show a current negative score, the overall score is still positive (2.45), because the index with the strongest correlation was positive.
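For anyone curious about the mechanics, here is a rough sketch of the combination step. This is not my actual code; the random data, window lengths, and weighting scheme are placeholders standing in for the real historical series and fitted regressions:

```python
import numpy as np
import pandas as pd

# Placeholder monthly return series; in the real model these are historical
# data for RCKY, oil, a consumer discretionary index, and a microcap index.
rng = np.random.default_rng(0)
idx = pd.date_range("2013-01-31", periods=60, freq="M")
data = pd.DataFrame({
    "rcky": rng.normal(0.01, 0.08, 60),
    "oil": rng.normal(0.00, 0.07, 60),
    "consumer_disc": rng.normal(0.008, 0.04, 60),
    "microcap": rng.normal(0.009, 0.05, 60),
}, index=idx)

signals = {}
for name, window in [("oil", 10), ("consumer_disc", 4), ("microcap", 4)]:
    # How far RCKY has lagged (or led) the related series over the window
    gap = (data["rcky"].rolling(window).mean()
           - data[name].rolling(window).mean())
    # Correlate today's gap with next month's RCKY return
    corr = gap.corr(data["rcky"].shift(-1))
    signals[name] = {"corr": corr, "latest_gap": gap.iloc[-1]}

# Composite score: weight the latest gap for each pair by the strength
# (and sign) of its historical correlation, then sum.
score = sum(s["corr"] * s["latest_gap"] for s in signals.values())
print(signals)
print(f"composite score: {score:+.3f}")
```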
There are still some flaws in this model, and I am still finding minor bugs on a weekly basis. But I am constantly making improvements and have a long roadmap of planned enhancements. I will continue to post updates on ideas I have and changes I implement. Any thoughts on my approach or questions are welcome.
I still think the best solution for power storage is pumped hydro. It seems a lot more scalable than lithium-ion or other chemistry-based solutions. That's why I was heartened to see what Scotland has planned, as discussed in this article. Given Scotland's growing wind power capacity, building battery farms doesn't seem to make sense. So just use Loch Ness as your reservoir. Here is a great view of the plan:
This seems like an obvious solution to the biggest problem with renewable energy: inconsistent generation. It allows energy to be 'banked' in the upper reservoir, so when the sun isn't shining or the wind isn't blowing, power can still be generated consistently by the hydro turbines.
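To give a sense of the scale of that 'banking', here is a back-of-the-envelope calculation. The reservoir volume and head below are hypothetical numbers picked for illustration, not figures from the actual Loch Ness proposal:

```python
# Rough energy stored by pumping water uphill: E = rho * V * g * h
rho = 1000           # kg per cubic meter of water
volume = 2_000_000   # cubic meters pumped to the upper reservoir (hypothetical)
g = 9.81             # m/s^2
head = 200           # height difference in meters (hypothetical)

energy_joules = rho * volume * g * head
energy_mwh = energy_joules / 3.6e9   # 1 MWh = 3.6e9 J

round_trip_efficiency = 0.75         # typical pumped-storage range is ~70-80%
usable_mwh = energy_mwh * round_trip_efficiency
print(f"stored: ~{energy_mwh:,.0f} MWh, usable after losses: ~{usable_mwh:,.0f} MWh")
```

Even with made-up numbers, a single pumping cycle banks hundreds of megawatt-hours, which is the scalability argument above in concrete terms.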
I hope this is just the start of a movement as renewable energy projects grow, and that this becomes the de facto standard for energy storage and distribution.
Now that Google has announced it is stopping development of AngularJS, I have started to think about what I want to use for future projects. It's been a year since my post on my Angular breakthrough, and I now feel like I am thinking in Angular when coding. So do I abandon ship and migrate to the latest and greatest framework?
I still have numerous applications I maintain in AngularJS, and I see no reason to migrate them to a different framework. Some are kind of slow, but acceptable, and I am not ready to take on the mental anguish of a framework migration. Just because AngularJS will not get any new features doesn’t mean it will be broken. Microsoft Access hasn’t received any new meaningful features in 10 years and it is still a useful product.
However, if I were to start a new project, what framework would I pick? It depends. If I had an enterprise customer looking to build a robust app, I would probably pick Angular 2/4/5. That would be much more performant, and there will likely be Angular developers around for years to come, whereas AngularJS coders might become as hard to find as COBOL or Access programmers.
If I were starting a small project for myself, I would probably continue to use AngularJS, since I am comfortable with it and can crank out code pretty quickly. And someday it may be easily upgradeable.
I am also hearing a lot of buzz about Vue (https://vuejs.org/), but it seems comparable to AngularJS, so what's the point? If I didn't already know AngularJS, I might have considered learning Vue, but at this point it is too late.
So even though AngularJS is being sunsetted, I am enjoying being competent in it, and it's hard for me to consider abandoning it. But I will likely try an Angular 5 project later this year to see if I am ready for another learning-curve spike. If I do, I have to decide whether I also want to learn TypeScript or Dart. I hear Google writes a lot of its Angular code in Dart, so maybe that will be the long-term winner. The problem is that most of the examples online are in TypeScript.
Over the last few months I have been working with a client to help move them to Azure, and from that I have gotten my first immersion in Microsoft Azure. Azure is Microsoft's cloud platform, the future of the company as Windows becomes less and less profitable. My client's project primarily consists of setting up a virtual machine (VM) in the cloud and essentially moving the whole environment out there. That's the simpler approach; however, IT is still on the hook for managing the VM, applying updates, and so on, so it's not the optimal long-term plan. It also isn't cheap: the monthly costs are not much less than buying the hardware yourself, but you do offload the headache of dealing with hardware.
The process of migrating to a VM, while still complicated, is nowhere near as complicated as using Azure's Platform as a Service (PaaS) or Software as a Service (SaaS) offerings. As a side project, I have been working to migrate some batch tasks that run every night on my home computer over to Azure. This will allow me to manage these jobs even when I am not home, and free up my home computer in the evening.
I am surprised that something this simple is amazingly hard to get started with. After attempting to find help in the Azure portal, I got nowhere, so I went out to Google and searched for help. There are lots of results, all different, with many covering features that no longer exist in Azure. OK, so I limited my Google search to articles posted within the last 90 days and settled on using WebJobs. First, I apparently have to set up an App Service Plan, then an App Service inside that plan, and then I get to set up WebJobs in that App Service. I set up a WebJob, but it's still not clear how to hook it up to any code.
So back out to Google, and I ended up finding the WebJobs SDK, which is a Visual Studio solution made up of 17 projects. Hopefully after understanding how this thing works, I will find out that WebJobs is the answer. I may find out it is for something else, and that Azure Functions are the way to go. So many options, so many dependencies, and so many changes; I don't know how anybody can really understand the whole Azure world.
Here is a screenshot of all the services available:
This picture has the compute options expanded, but each of the other sections expands as well. By my count, there are over 200 different services available, each with its own set of dependencies and concepts to understand. To use them, you pretty much have to think like the developer who wrote them, and it seems like each developer thinks about each service a little differently. I just don't get a cohesive feeling about all of this yet.
I am not an Azure hater, and I think it has all of the features I need (and more), but the discoverability of it all is lacking in my opinion. Granted, it is still an immature platform, and Amazon's AWS has its share of oddities. But it is hard to know where to get started and how to do basic things. Perhaps that is by design: as the cloud displaces the need for IT staff, maybe the plan is to seed a whole new industry of Azure consultants. It seems like it could be a booming business.
So I am sticking with it, and I hope to have some of my processing on Azure in the near future. If I can do that, and retain my sanity, I will feel like I have made a big leap to the cloud.
I was talking with a co-worker recently about data visualization, and he showed me a site he ran across with the latest and greatest in graphing. D3 (Data-Driven Documents) has some pretty amazing ways to chart data; check out the gallery of examples here.
With all the expansion of data we have seen over the last few years, it's nice to see display technologies keeping up. Now the hard part is figuring out which kind of chart to use when displaying data; perhaps we have too many options.