As we start a new year, I felt it was time to hold a mythical business meeting with the mythical marketing department here at Vertical Financial Systems (VFS).  A lot has changed in technology since VFS was founded, and I got to thinking about what the future holds for small business technology.

So I asked myself – er, the marketing department – where will small business spending on technology services go in the next few years?  Here are some thoughts that came out of that ‘meeting’:

Spending on business websites may be almost nothing going forward.  For a few dollars a month, anybody can spin up a website in WordPress at GoDaddy or SquareSpace or any number of providers, using pre-defined templates and designs.  No hiring designers, coders, system admins, nothing.  Just hire an English major (or better yet an intern) to write up your pages and you are in business.  In the 2000’s this was a pretty decent business for a large number of people, but I can’t see why most businesses would spend a lot of money on that anymore.

The rise of smartphones has made that platform the application platform of choice.  Unfortunately, it is still painfully expensive (compared to web development) to build out an application.  Plus, you have to write it essentially twice – once for Android, once for iOS (though few people are writing for Microsoft anymore).  Software is slowly coming out to make this development easier, but I think we are still a few years away from getting the costs significantly down.  The pain isn’t only in your client’s pocketbook – trying to be an expert on both platforms is no fun either.

But I do think the phone is the future application development platform.  Websites may evolve to be just brochure sites – providing info about your company and services, but not heavy functionality for existing customers.  Stuff like checking order status and billing may still live on the web for light customer service, but the heavy application development should be on the mobile device, where native access to GPS, email, camera, etc. is available.  Interestingly, in a previous post I mentioned I got a Wink Hub, and the only way to control that is via a smartphone app – there is no website application or login where it can be managed.  Perhaps the future is just starting to arrive.

Content creation should still be a big market – not static website content, but custom, relevant content pushed to your customer.  Information pertinent to your customer should be selectively pushed to their inbox or smartphone – special offers, account notifications and the like.  So getting content created uniquely for each customer, based on what you know about the customer, seems like the big technology winner.  In order to engage customers, the fusion of marketing and technology will need to be stronger than ever before.  Thinking about this reminds me of the launch of Internet Explorer 4 (circa 1997) – where the big talking point was push technology, where you could have web pages pushed to your PC.  The vision was fatally flawed, as web pages were just beginning to be dynamic, and frankly nobody really wanted all that content stored on their PC.  Content delivery has to be smart and targeted.

So to summarize, web applications are not the big growth industry moving forward, and smartphone application development is still too expensive for many small businesses.  So while we are in this technological transition, it probably is a good time to build up push technologies, make your existing applications smarter about what your customer or lead is interested in, and push relevant content to your customers.  This investment will pay off regardless of what platforms emerge in the future.

Lots to think about in the coming year.   Regardless of what new technologies or trends emerge, as always there will be a lot of new things to learn and decisions to be made.



Dan on January 18th, 2017

I have had a chance to tally up the results from my second year of solar panel power production.  When I purchased the solar panels in 2015, I had estimated a six-year payback based on estimated production and on the subsidies paid on power generated.  For 2016, I generated 3.45 megawatt-hours, which was down 13% from 2015:



Solar Power Generated 1/1/15 – 12/31/16


I assume the two primary factors in the production drop-off are that 2015 was an exceedingly sunny year that set a high bar, along with the gradual drop-off in the efficiency of the panels over time.  I will be surprised if following years show the same level of drop-off.  Interestingly, this year showed better production numbers for the last five months of the year vs 2015, so that gives me hope that it’s not a panel issue.

In my payback analysis I had budgeted $2,098 of revenue a year, and my check for 2016 came in at $1,853.  So early on it looks like six years might be optimistic.  The other danger to the six-year payback is the possible lowering of incentives on home power generation.  The money available always seems to be at risk, and it appears possible that incentives will be less than originally estimated.  House Bill 1048 was introduced to finalize the incentives (see bill summary here), so a lot will depend on whether it passes this year.  Interestingly, this bill does include incentives past 2020, which I did not include in my payback analysis, so that is a plus.
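As a rough sanity check on that payback math, here is a back-of-envelope sketch.  The system cost isn’t stated in this post, so I back it out from the original six-year estimate – that implied cost is an assumption, not a figure from my records:

```python
# Back-of-envelope payback estimate for the solar investment described above.
# The system cost is not stated in the post, so it is inferred from the
# original estimate of a six-year payback at $2,098/year (an assumption).
estimated_annual_revenue = 2098.0   # original budgeted revenue, $/year
actual_2016_revenue = 1853.0        # actual 2016 incentive check, $

implied_system_cost = 6 * estimated_annual_revenue   # roughly $12,588
payback_years = implied_system_cost / actual_2016_revenue

print(f"Implied system cost: ${implied_system_cost:,.0f}")
print(f"Payback at 2016 revenue: {payback_years:.1f} years")
```

At the 2016 revenue level the payback lands at roughly 6.8 years rather than six.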

The maintenance of the panels has been a pleasant surprise: an occasional hosing off of the panels from ground level, and once or twice a year I get up on the roof and wash and squeegee them.  In the spring the pollen was pretty dense on the panels, and I did notice an increase in production after a cleaning.

On a related power note, while I was out checking power meters, I checked my usage meter, and my usage was 8.4 megawatt-hours, up 4% for the year.  This surprised me a bit – since in 2015 we ran more air conditioning, and in 2016 I replaced a number of lights with LEDs.  However, I will attribute the increase to the additional time I spent working from home in 2016, thus leaving more lights and heat on (maybe the refrigerator door being opened more often?).  Anyway, I am targeting a reduction in usage for 2017.

In summary, I am still pleased with the investment, and am now shooting for a 7–8 year payback, which still isn’t too bad.  Solar panel prices continue to drop, and by the time I need to replace my roof, solar shingles look like they might be ready.  So I plan to be in the power business for many years to come.

A followup to my previous post on the programming churn.  In my many years of programming, the thing I find amusing is how programming has gone back and forth between centralized and distributed computing.  Talk about a churn, re-envisioning architectures with new languages where old is new.

For example, when I first started coding the mainframe was king – central processing with multiple dumb terminals.  COBOL, Assembler, and CICS were the tools of the day.  In the mid 1980’s ‘client server’ computing was hailed as the new world order, taking advantage of the power of these new PCs that were starting to appear in businesses.  One problem – the infrastructure in the 1980’s made it complex (and slow) for two machines to talk to each other.  Networking was in its infancy, and cross-company communication was via modems and phone lines.  In the early 1990’s, client server started to mature with software like FoxPro, Microsoft Access and Visual Basic, as well as better PC-based databases.

Then in the mid 1990’s the Internet came along, and from the late 1990’s until the early 2010’s, the pendulum swung back to the server model – where all the processing was done on the server and dumb HTML pages were served up to the browser.  The model was eerily similar to mainframes.  New development in client server pretty much died as browser-based apps became the new normal.

Did I say client server died?  Wait – like a zombie it is back, this time in the form of JavaScript, JavaScript libraries, and JavaScript frameworks.  JavaScript running in the browser has evolved from simple form validation to a full-blown programming platform.  First jQuery started the flood of code onto the browser; now frameworks like AngularJS or React have pretty much brought us the return of fat-client computing.

In the 80’s the infrastructure was the bane of client server – this time around, it’s the language.  By most accounts JavaScript is a terrible language that was pressed into service because it was the only cross-platform language available.  So to get around the ugliness of JavaScript, it has been somewhat abstracted by libraries that generate JavaScript.  In some ways it’s a convoluted mess (this is a great post discussing the current challenges).  The current tooling is painful also – the whole front-end world seems disjointed.

Let’s not even get into the current horror story: if you want to build a phone app, you have to build it twice (with very little code sharing) – once for Android, once for iOS.

So will client server win out?  I think in a couple of years the current hot mess will be cleaned up as the language and tooling improve.  One complicating factor will be the rise of the Internet of Things – which will bring forth a zillion tiny devices that can talk to each other.  Yes, your washer can talk to your dryer, your lights can talk to your doorbell, etc.  This is going to drive a whole new architecture of applications.  These ‘client’ devices will be pretty dumb – though some may be servers, since all they supply is data (lightbulbs, switches), and some may be considered clients because they have a user interface (thermostats, garage door openers).  This is going to blur the lines between what is a client and what is a server.

So maybe the whole concept of servers and clients will disappear, finally putting an end to the client server debate.  One thing is for sure though – there will be new languages and software architectures to learn, keeping the programming churn alive and well.


Dan on December 21st, 2016

I was in a discussion with a family member who was heading for vacation, and he asked me if I had any reading recommendations.  Most years I rarely get through more than a book or two, but this year for some reason I plowed through a bunch of books.  I typically read histories and biographies, yet sprinkle in a little popular fiction when I want to give my mind a rest.

So since my year has been so productive, I decided I had better put together a ‘best of’ list of the stuff I read this year.  Since the fiction I read is just pop-culture stuff, and usually pretty old, I didn’t include it; I decided to limit the list to history this year.

So, without further ado, here is my list of favorite history books I read this year.

1. Astoria: John Jacob Astor and Thomas Jefferson’s Lost Pacific Empire: A Story of Wealth, Ambition, and Survival

We all know about Lewis and Clark and their exploration to the Pacific.  Soon after, in the early 1800’s, John Jacob Astor saw the business opportunity of this unexplored land and tried to build a huge monopoly in fur trading.  Even though I have grown up in the Pacific Northwest and traveled extensively around this region, I wasn’t aware of this interesting story.  As I sit here during a dreary December day, I can only imagine the mental struggles this party went through.  In addition to being a great adventure story, it has an interesting business angle too.


2. D Day Through German Eyes

A short book – only 140 pages – so it is a quick read about the German soldiers stationed on the French coast on D-Day.  As part of a German propaganda effort, the author interviewed a number of soldiers prior to the invasion, gathering patriotic stories.  The author then tried to track down a number of these soldiers after the war to hear their stories of what happened to them on the day of the invasion.  An angle on WW2 you don’t often hear, and you realize how tragic the war was for everybody involved.


My friend and avid history buff Tony recommended this to me, and I really enjoyed it.  This book covers the rise of the Navy after the age of sailing ships and the politics behind it.  If you enjoy 20th century history, this is a good introduction to where it all got started in the 1880’s – a time of massive military buildup by Britain and Germany, and a shift in allegiances in Europe that led to WWI.

An account of the US Ambassador to Germany’s posting in Berlin starting in 1933.  An interesting perspective on prewar Germany, though it does get a little soap-opera-ish with the stories of the daughter’s dalliances.  It does provide a feel for what it must be like to watch a government steer towards tyranny.

A book recommended by a friend – The Grumpy Geezer.  I didn’t know much about it, but really enjoyed it.  A true first-person account of a pilot in Vietnam in the late 60’s.  The writer is not a professional writer, but the story he tells of his experiences is engrossing, and gives an interesting peek into how the soldiers felt about the war.  It reads like a journal, and I think anybody who is a pilot would especially enjoy it.

If you like my #2 pick Dreadnought, you will want to read this pseudo-sequel.  It picks up the story after World War I, starting with the Paris peace conference and the politics behind the Versailles Treaty.  I was expecting to read more about how the Allies shafted Germany, but most of the book focuses on all the other decisions that were made carving up countries.  It makes you realize that many of the decisions made in 1919 are responsible for many of the issues of today.  And if the world were given one historical ‘do-over’ in the 20th century, this period might be a good candidate.

For 2017 I have solicited input from several people, and have a large reading list ahead of me. My resolution this year is to try to stay away from WWI or WWII – as I typically gravitate to that era and need to broaden my horizons. There is lots of good history out there – so if you don’t feel like physically travelling anywhere next year, I recommend you travel in the fourth dimension and read some history.
Dan on December 8th, 2016

I have been waiting for several months now to jump into the smart home arena.  But I had been waiting for a clear winner to emerge.  Apple Home, SmartThings, Nest – a bunch of big players are involved, but all seemed to have their drawbacks.

Finally,  I started seeing good reviews for the new Wink 2 hub, so I went ahead and took the plunge and bought one. The Wink Hub is a central unit that controls all sorts of smart switches, outlets, and other things. I wanted something compatible with ZWave devices, which is an emerging open standard that the Wink Hub supports.

For my initial setup, I also ordered a GE smart switch, which allows control via any ZWave hub like the Wink 2.  My first simple smart-home test was to use this switch on my porch light and have it go on and off on a schedule.  I was surprised at how easy it was to set everything up.  Plug in the Wink 2 hub and hook it to your router.  Then I downloaded the phone app, created an account, and easily got the hub attached to my home network via the phone app.

Once that was completed, on the phone app I clicked ‘add a product’, and it was super easy to add my GE light switch and set a schedule for on/off.

There are a ton of products you can get that hook into the hub.  You can also set up ‘robots’ that you can script actions for.  For example, if you hook up a motion detector, you could do something like ‘if motion is detected by the detector, and it’s after sunset, turn on all the lights and turn on a radio’.  You can also do things based on whether or not you are home, if you enable the app to sense where your phone is.

They also have an API, so because I am a geek I will be writing my own programs to integrate with external things – such as temperature (obtained from Weather Underground via their API), or a link to my solar panels to determine if the sun is out.  They also have shades that can hook into the hub, so I could ping my solar panels every afternoon and, if the sun is out, close the shades in our living room.
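To make that shade idea concrete, here is a minimal sketch of the decision logic.  The wattage threshold and the afternoon window are my assumptions, and the actual calls to the panel monitoring API and the Wink API are deliberately left out rather than guessed at:

```python
# Sketch of the "close the shades when the sun is out" automation described
# above.  SUNNY_THRESHOLD_WATTS and the noon-to-6 PM window are assumed
# values, not anything taken from the Wink or panel-monitoring APIs.
SUNNY_THRESHOLD_WATTS = 1500  # assumed cutoff for "the sun is out"

def should_close_shades(current_output_watts: float, hour: int) -> bool:
    """Close shades only on sunny afternoons (noon to 6 PM)."""
    return 12 <= hour < 18 and current_output_watts >= SUNNY_THRESHOLD_WATTS

# In a real script, current_output_watts would be fetched from the panel
# monitoring API each afternoon, and a True result would trigger a call to
# the Wink API to close the living-room shades.
print(should_close_shades(2200, 15))  # sunny mid-afternoon -> True
print(should_close_shades(2200, 20))  # evening -> False
```

The scheduling piece (run this every afternoon) would live in cron or a small daemon loop; only the decision rule is shown here.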

Interesting stuff.  I do think this Wink 2 hub is a breakthrough product for bringing the smart home to the masses.  I am excited for all the new geeky things I can do with it.

Dan on November 30th, 2016

I typically have been a bottom-up investor – look for companies I think are undervalued, and invest in the hope that others will see the bargain and bid up the price.  This is called ‘bottom-up investing’: focusing on the individual company rather than the stock sector or the asset class.

However, the market action of the last month has shaken my belief that bottom-up investing is rational.  Since the Presidential Election, stocks in the financial and industrial sectors have taken off, on really no fundamental news.  For example, take a look at Columbia Banking (COLB) and the Vanguard Financials ETF (VFH) over the last 3 months:


No news, no management changes, just a lot of stock being pumped into the financial sector because of the new administration.

Here is another example: FLIR Systems (FLIR), in the industrials sector, compared to the Vanguard Industrials ETF (VIS):


Again, no news – it just happens to be in a sector where money managers are moving money to.  FLIR Systems has a number of industrial products centered around imaging and surveillance cameras.  I hope this doesn’t reflect a belief that this administration is going to ramp up spending on domestic surveillance; if it does, that isn’t a positive sign.

And what caused this huge shift in sector allocation?  My best guess is Trump’s leaning toward rebuilding infrastructure and piling up bigger government debts.  That’s a lot of money moving around based on an administration whose policies are murky at best.

So, should I abandon bottom-up investing and switch to top-down investing?  No.  I won’t abandon it – I cling to the belief that undervalued companies will be recognized eventually.  However, I do think I need to increase my focus on watching asset class and sector indicators.  It does appear that in this world, where so much money is managed by a small pool of investors, there is something to watching what the so-called ‘smart money’ finds attractive.


Dan on November 16th, 2016

A while back, I saw a tweet from Elon Musk recommending the book ‘The Machine Stops’, and was curious, as I thought I was somewhat up on historical science fiction.  A public domain copy of the book is available online.

Considering this book was written in 1909, the vision of the future by the author is pretty good.  Television, public air travel, the internet(?) all pretty well envisioned.  This is definitely worth a quick skim.





Dan on November 9th, 2016

As an avid reader of history, and a follower of the US political scene, I have to rank last night’s election of Donald Trump to the presidency as one of the most astonishing national events that I have witnessed.  Much will be written in the next days/months/years/decades about how this came about, so I may not have any unique insight, but I thought it would be worth recording my initial impression.

The fact that the winning candidate himself was so flawed is perhaps not the biggest political wonder.  Not to mention a popular incumbent Democrat, a 4.9% unemployment rate, and little inflation.  A Republican party which in large part refused to endorse its candidate, waiting for him to go away, is now running the show.  This defies all conventional thinking.  It also means the huge power vacuum that was in the Republican party has made its way to the White House.  Nobody knows what to expect or who the power players will ultimately be – for that matter, nobody really knows what Donald Trump stands for.  The Democratic opposition is temporarily shattered, decapitated overnight.  Trump’s policy pronouncements were necessarily incoherent to get him elected.  Nobody even knows if he wants the job of President.

Long term, the big takeaway for me is how nationalism is gaining momentum in the US and in most developed economies.  I think people did not vote for Donald Trump; they voted against the status quo and the elites – a vote driven by growing wealth inequality.  Call it the ‘revenge of the flyover states’ – but booming high-tech wages and increasingly large Wall Street profits caused this as much as any of the politicians’ messages.

This proves Brexit was not an anomaly – to me this election has the same signature as Brexit: the middle class taking control and voting for change regardless of what the power elite recommend.  I would expect more nationalist and right-wing votes in Europe, and possibly Japan.  Should this trend continue, I have little optimism for the world economy and the hope of world peace.

After writing this post, but before publishing it,  I ran across Glenn Greenwald’s post also supporting this theory.  It’s a great read to help try to make sense of this.

Shorter term, I look for a recession within the next year.  Nothing to do with whatever policies might arise out of the new administration – more due to the massive uncertainty of the next few months.  This has to put a lot of capital spending (across the world) on hold as corporations wait to see who fills the power vacuum.  Early reports are that the widely expected December rate hike is off the table – if so, that shows the Fed is already going on the defensive.  I can’t see how this will not slow the world economy.

The Obama administration came in on the slogan of ‘Hope and Change’, and in some ways he delivered on the promise.  Now perhaps the slogan should be changed to ‘Fear and Hope’,  as we perhaps need to fear the anger demonstrated by the middle class with this vote, and hope that this administration can surprise us and help build a better world.

Dan on October 26th, 2016

When I read these two articles, I was reminded of the Seinfeld episode where George realized that he has been making bad choices all his life – so he should do the opposite of what he thinks is right.

Case in point: this article discussing how the budget deficit is now increasing due to stimulus points out how the government has been making bad choices.

The uptick, which had been projected last winter by government analysts, largely reflected the revenue loss from expiring tax breaks for businesses and individuals that Congress extended in December.

Wait a minute…  I thought lower taxes were supposed to trickle down and result in increased tax receipts – how could that happen?  And all you Keynesian government economists out there – what are you thinking?  We are supposed to stimulate the economy with deficit spending during downturns, and recapture that excess spending during the good times.  Are these not good times?  How low do unemployment rates need to go before we are in good times?  If not now, when?

Just like George – I think they need to rethink their choices.

Now let’s move on to the Fed.  Now the thinking is that maybe low rates are hurting the economy!

In the minutes for the central bank’s Federal Open Markets Committee September meeting, several higher-ups at the Fed hinted that the policy of historically low interest rates was doing more harm than good to the economy.

After several years of keeping rates at or near zero to grow the economy, they now wonder if low rates are stifling growth?  Perhaps their models did not account for the huge demographic of baby boomers saving for retirement: with no interest income, they try to save even more rather than spend and stimulate the economy.  Well, at least now they are thinking about it.

I think the next step is to take a page out of the Costanza playbook.


Dan on October 12th, 2016

I was glad to see this post on the programming churn.  I thought the author’s point interesting regarding the lost productivity of switching languages and software development paradigms every couple of years.

I agree with the point that the progress in software is logarithmic – in the early days the progression from one language to another was dramatic, and now the progression is much more incremental.  Also consider the amount of code still running in a wide variety of languages, and you realize the intellectual capital lost when languages become obsolete.

I have learned to deal with the changing language landscape by watching and waiting.  I have a stable of languages that I am comfortable with, but I watch and sometimes tinker with new languages to see if they truly are radical improvements.  A good analogy would be car ownership – new cars come out every year with new features, but I don’t get a new car every year.  I watch to see what the emerging trends are, and when my current car has enough deficiencies over a perceived new car, I upgrade and ‘learn to drive’ the new car.

With software, ‘learning to drive’ a new language can be more painful than learning to drive a new car (well… unless you get in an accident in your new car).  Every new language has new concepts you have to grasp, and hours can be wasted going down dead ends and googling solutions.  Plus, just because you upgrade doesn’t mean all your code written in the previous language was ‘traded in’ – all that code is still around, and in theory needs to be rewritten in the next language.

Having said all this, the changes incorporated in new software and tooling in the last 20 years have been remarkable.  Things such as true component architecture, API architectures, and unit test frameworks have allowed us to make huge leaps in what we can do.  But do these improvements as a whole outweigh all the painful incremental language changes it took to get here?  That’s a tough one.  I lean towards yes, it is worth it.  As painful as the constant innovation has been, the world of software has evolved to support a wider variety of functions than anybody ever dreamed.