Dan on April 26th, 2017

A few years ago a family member lent me a manuscript written by my late grandfather that had been sitting in a drawer for 50 years or so.  My grandfather, Harry J. Owens, who grew up in Illinois, had a long fascination with Lincoln and had collected many stories about him over the years.  I had been thinking for several years that it would be a good experiment to publish an e-book, and this manuscript seemed like a great project to take on, so in 2014 I started putting it together.

Upon receiving the manuscript, which was about 250 typewritten pages, I had assumed it would be a matter of scanning the pages in via an OCR reader, correcting typos, and voilà – I would be ready to go.  But as I got deeper into it, it turned out to be a much bigger project.  It took me about two years to get all the typed pages into electronic format (hey.. I do have a day job..) – I dedicated about an hour a week to it.  Once I got it all assembled, I decided the book was actually only about three quarters done.  Many of the stories didn’t match up with the chapters, and many had not been organized into chapters at all.  So I had to put my editing hat on and spend a few months rearranging the content and filling in some gaps.

Finally, earlier this year, after painstaking proofreading with the help of family members, the book was ready to go.  For ease of publishing I went the Amazon Kindle route, which limits it to Kindle but instantly gets you on Amazon.  Publishing went pretty smoothly – the Kindle Direct Publishing site has plenty of help to get newbies like me through the process, and pretty soon my book was on Amazon.  There are a lot of things to think about when publishing a book – pricing, advertising strategy, cover art, etc.  Luckily, since this was primarily an experiment, I didn’t have to sweat the details; I just used my best judgment and pushed the publish button.  I have not done any sales promotion or advertising; I may do so in the future, just as an experiment in seeing how one would try to make money writing books.

Since this book was published primarily for the benefit of family, the next step was to create physical printed books.  After asking around, I settled on Blurb.  Naturally I thought this would be simple.. but it turned out to require another round of reformatting and organizing to get the book print-ready.  After a few more months, though, I was able to hold a printed copy in my hand.

Even though getting this book published took way longer than I expected, it has been an interesting and worthwhile process.  Editing this book has helped me understand how to craft an idea into a book.  While I had a lot of input into the final product, this book is not my ‘voice’ – it is rooted in the 1950s, before I was born.  The experience has given me the confidence that maybe someday I could write a book of my own.  I have a few ideas kicking around in my head, and may start framing something out in the next few years.  For now, though, my literary expertise will remain with this blog.

Dan on April 12th, 2017

A while ago I did a post on my experience with AngularJS and the highs and lows of working with Angular.  AngularJS is a JavaScript framework from Google that facilitates creating web applications.  It’s hard to believe I have been plugging away at Angular in my spare time for almost two years now.

How have I fared over these two years?  A few months ago I decided to upgrade to Angular 1.5, which introduced some new concepts and syntax that make development better.  I have just finished my first 1.5 app (well – ‘finished’ might be too strong a word) – you can see it here.  It may not look like much, but it is actually doing some cool things under the covers.  One thing my immersion into Angular has convinced me of is that the architecture is sound.  I have been using Web API as my backend and WordPress as my frontend, and I have the process down for integrating all this stuff together.  For a non-graphical designer such as myself, incorporating WordPress has greatly reduced the time it takes to spin up sites around my applications.  In addition, I have built myself a WordPress plugin which simplifies the work of integrating AngularJS into WordPress.
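
For anyone curious what the 1.5 style looks like, here is a minimal sketch of the component syntax that 1.5 introduced.  The component name, binding, and template below are made up for illustration – they aren’t from my actual app:

```javascript
// A minimal Angular 1.5-style component definition. In a real app this
// object would be registered with:
//   angular.module('myApp').component('greetUser', greetUser);
var greetUser = {
  // '<' is a one-way binding: the parent passes a name into the component
  bindings: { name: '<' },
  // $ctrl refers to the controller instance within the template
  template: '<p>{{$ctrl.greeting}}</p>',
  controller: function GreetUserController() {
    var ctrl = this;
    // $onInit is one of the lifecycle hooks introduced alongside components
    ctrl.$onInit = function () {
      ctrl.greeting = 'Hello, ' + ctrl.name;
    };
  }
};
```

The nice thing about components is that the bindings, template, and controller travel together as one unit, which makes pieces of an app easier to reason about and reuse.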

The other interesting thing I have noticed after two years is that I am starting to think in Angular when coding.  Until recently, Angular code still hadn’t flowed intuitively from my fingers.  Hopefully gone are the days where doing something takes two hours of Googling to produce five lines of code.  Now that I have a decent codebase of Angular routines I have written, I find myself copying my own code instead of searching the web.  I am also getting better at diagnosing errors, and at thinking a few steps ahead in Angular when building things.

Learning Angular has reminded me what an interesting process learning a new language is.  It has been brutal at times, but it also has those ‘aha’ moments where things start to make sense.  Some days I don’t have the mindset to deal with the frustrations of Angular, so I just retreat into C# or SQL when I need to feel competent.

On a scale of 1 to 10 – where 1 is knowing no Angular and 10 is an expert – I would put myself at a solid 6.  I give myself an extra point above average because my experience with other languages does give me an edge.  But at least I am feeling confident and fairly productive now.

So where do I go from here?  Angular 2.0/4.0 is out now, and along with it come TypeScript and the various other utilities needed to implement Angular apps.  I did a quick experiment with the Angular CLI, and it looks like it will help deal with all the crap involved in learning and implementing next-generation Angular apps.  Still, it’s a large jump from Angular 1.5, and frankly, I am in no hurry to leap.  I am going to bask in the confidence of my 1.5 abilities for a while, continue to hone my 1.5 skills, and write some cool things.  But I still think that, in the long term, Angular is going to win a lot of developers’ hearts and be a leading platform.


Dan on March 29th, 2017

I have been musing of late about the algorithm behind self-driving cars.  For all you programmers out there – the self-driving car is a great thought experiment in devising a technical solution to an existing problem (assuming you think humans driving cars is a problem).  If presented with the problem, how would you attack it?

Here were my thoughts.  Assuming a car has over 20 different sensors, make each sensor responsible for a task, and have each sensor elevate a recommendation to a central processor.  For instance, ‘right front camera detects pedestrian on right’, and ‘front camera detects stop sign at 100 ft’.  The central processor would then weigh the various inputs and make a decision on how to proceed.
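
To make the idea concrete, here is a toy sketch of that recommend-and-weigh scheme.  The sensor names, actions, and weights are all invented for illustration – a real system would obviously be vastly more sophisticated:

```javascript
// Each sensor elevates a recommendation (an action plus a confidence
// weight) to a central processor, which tallies the weighted votes and
// picks the winning action. All names and numbers are made up.
function decide(recommendations) {
  var scores = {};
  recommendations.forEach(function (rec) {
    scores[rec.action] = (scores[rec.action] || 0) + rec.weight;
  });
  // Pick the action with the highest total weight.
  var best = null;
  Object.keys(scores).forEach(function (action) {
    if (best === null || scores[action] > scores[best]) {
      best = action;
    }
  });
  return best;
}

// e.g. 'front camera detects stop sign', 'right front camera detects pedestrian'
var inputs = [
  { sensor: 'frontCamera',      action: 'stop',     weight: 0.9 },
  { sensor: 'rightFrontCamera', action: 'slow',     weight: 0.7 },
  { sensor: 'gps',              action: 'continue', weight: 0.3 }
];
// decide(inputs) → 'stop'
```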

So that’s one layman’s thought. I ran across this video that shows how Google approaches the problem.  According to the presenter, the algorithm first processes all the inputs and detects known shapes, then calculates the predicted movements of those shapes, then makes a decision.

Google has invested a lot of money in machine learning, so it wouldn’t surprise me if its algorithm depends more on machine learning than, say, Tesla’s or BMW’s.  The algorithm may also be different for self-driving trucks, since their use case is more highway-oriented, with fewer variations in objects, and hopefully more defensive.  Also, self-driving trucks will likely have podding (often called platooning) technology, allowing trucks to virtually chain together to reduce wind drag and improve mileage.

Whatever the case, it seems certain that in five years, when you buy a car, a significant factor will be ‘what operating system does it have?’.  Will different operating systems make different moral judgments about what to do in an emergency?

It could also be that the best algorithm hasn’t even been designed yet, as we are in the early stages of this technological transformation.  I hope every technology company adds the question ‘how would you design a self-driving car?’ to its list of interview questions when hiring software developers.  It seems like the ultimate programming problem of our day.

Dan on March 19th, 2017

Back in July 2015 I wrote a post about how Apple looks like the next Microsoft.  I think the last year and a half has proved my theory to be on track, and I am seeing more and more articles in agreement.

The latest comparison?  In the early 2000s, Microsoft was the leader in smartphones with its Windows CE platform.  Along came Apple with a better product, which pretty much killed it off.  A few years ago, Apple had the lead with Siri, the voice-driven virtual assistant that was far ahead of the competition.  Now Siri may be running a distant third, behind Google Assistant and, more importantly, Amazon’s Echo.

How could Apple let Amazon take over the market for virtual assistants?  Echo now has a huge number of partners, and this year’s CES was dominated by it.  With that kind of ecosystem, I don’t see how Siri can catch up.

I heard an interesting theory as to why successful companies such as Microsoft and Apple find it so hard to follow up a big hit with a successor.  The theory is that so many companies are trying to find the next big thing that it is rare for the same company to win more than once.  And companies that have an installed base to maintain and monetize are further distracted from innovating by managing their existing product line.  The theory assumes that a fair bit of success involves plain old luck – though luck can be defined as ‘when preparedness meets opportunity’.

So I will continue to watch this interesting parallel playing out with Apple.   With all the big players in the consumer electronics market – I will be curious to see if Apple can find magic once again.

Dan on March 3rd, 2017

Snap (SNAP), best known for its Snapchat app, is the latest tech IPO, and the largest since 2014.  I have never used the app, don’t really follow the company, and, like most people, feel that it is way overvalued and may end up in the same painful stock-price rout as GoPro.  However, I did have a couple of thoughts on yesterday’s IPO.

  1. Yes, it’s likely overvalued, but they do have an innovative marketing angle which has the potential to unleash something big.  Snapchat lets you put various filters on a picture of yourself, to make it more amusing to whomever you send it to.  Key to this is Snap’s ability to support sponsored filters – filters branded to the company paying for them – an example being the Taco Bell filter.  Sure, it’s kinda silly (but this whole app that they built a 33 billion dollar market cap around is pretty silly), but as far as building brand awareness and ‘hipness’, it seems pretty unique – any company looking to strengthen its relevance should find this more intriguing than TV advertising, banner ads, or popups.  Teenagers are the primary demographic for this app, so it would be an interesting ad choice when promoting a movie and its characters, or another product aimed at teens.  It will be interesting to see if this gets traction, or if the founders of Snapchat find other innovations in advertising.
  2. Economist Steve Liesman of CNBC made an interesting offhand comment when talking about the IPO.  He essentially stated that this exemplifies what is wrong with the economy and why growth is so slow.  Snap is a company getting a 31 billion dollar market cap – bigger than Paccar and Nordstrom combined.  Snapchat has 1,200 employees; Paccar and Nordstrom combined have 95,000.  Snapchat has 550 million in long-term assets (i.e. plant and equipment); Paccar and Nordstrom have 9.5 billion.  The new economy requires very little capital investment – which is great for those 1,200 people, but it siphons capital away from companies that spread it across the economy.  Maybe this helps explain why the Fed can’t get growth going even at near-zero interest rates.

I won’t be following Snap closely, but it may turn out to be an interesting story going forward.  It will be fun to see whether this is the next Facebook or the next GoPro.

Dan on February 21st, 2017

I have been critical of Microsoft’s strategy with Visual Basic for a while now – it has been treated as a second-class .NET language, slowly falling behind C# in new features and functionality.  The official word was that it would be separate but equal, but reading between the lines I felt it was on its way out.

Recently, Microsoft came out with a blog post clarifying the future of Visual Basic.

It appears Microsoft is going to position Visual Basic as a simpler, more straightforward .NET language focused on new developers.  I think this is a great move.  C# is a great language, but with all its different features and constructs, it can be overwhelming even for seasoned developers.  We definitely need a language that is the gateway to C#, and I agree that it makes sense to keep the feature set limited so new developers can understand the basics of .NET.  The world doesn’t need two languages that do the same thing (which is how VB and C# have been competing).

An interesting quote from the article:

An interesting trend we see in Visual Studio is that VB has twice the share of new developers as it does of all developers. This suggests that VB continues to play a role as a good, approachable entry language for people new to the platform and even to development.

I have always enjoyed VB, and I also enjoy C#.  Because I jump back and forth between legacy apps (VB) and new development (C#), I don’t have a problem switching between the two.  So new developers who start in VB can, once they understand .NET and want more power, switch to C#.  But if they are just writing simple programs, or building forms over data, they will do fine in VB and can stay there.  Any C# developer who says they can’t code in VB is lying.. (and I have heard a few developers claim this..)  And it’s not terribly difficult to migrate VB.NET to C# if an application grows.

So kudos to Microsoft for defining this strategy.  I hope it works out for Microsoft, and they once again are able to capture the hearts and minds of developers.

Click on the link below to read the full blog post.

The .NET Language Strategy


Dan on February 9th, 2017

I am a fan of Twitter the product – I think it fills a unique niche for news and events at a time when more people are getting engaged in social networks.  However, this week I finally threw in the towel and closed out my position, just barely breaking even.

Why?  I am reminded of a quote I heard from Jordan Ritter on Triangulation.  To paraphrase: ‘the most important thing for an organization is a good team.  A great team can be successful with an average product, but a bad team will screw up the next Facebook every time’.

That’s how I feel about Twitter – great product, but the current team is not making the moves to make it work.  I had hoped that when wunderkind Jack Dorsey returned as CEO, he would provide the leadership necessary to move the product forward.  After a year and a half, I am seeing few signs of improvement.  And if you look at the stock-based compensation numbers for its employees, Twitter looks more like a pre-bust company of the early 2000s.  Factor in that several companies looked into buying Twitter earlier in the year, and all walked away after due diligence.

Hopefully someone will buy the company, build a great product, and make the enhancements needed to better monetize the platform and make it easier for new users to get engaged.  I still think this platform has the potential to be great, but I am on the sidelines until I see signs of a great team in place.

As we start a new year, I felt it was time to hold a mythical business meeting with the mythical marketing department here at Vertical Financial Systems (VFS).  A lot has changed in technology since VFS was founded, and I got to thinking about what the future holds for small business technology.

So I asked myself – er, the marketing department – where will small business spending on technology services go in the next few years?  Here are some thoughts that came out of that ‘meeting’:

Spending on business websites may be almost nothing going forward.  For a few dollars a month, anybody can spin up a WordPress website at GoDaddy or Squarespace or any number of providers, using pre-defined templates and designs.  No hiring designers, coders, or system admins – nothing.  Just hire an English major (or better yet, an intern) to write up your pages and you are in business.  In the 2000s this was a pretty decent business for a large number of people, but I can’t see why most businesses would spend a lot of money on it anymore.

The rise of smartphones has made the phone the application platform of choice.  Unfortunately, it is still painfully expensive (compared to web development) to build out an application.  Plus, you essentially have to write it twice – once for Android, once for iOS (though few people are writing for Microsoft anymore).  Software is slowly coming out to make this development easier, but I think we are still a few years away from getting the costs significantly down.  The pain isn’t only in your client’s pocketbook – trying to be an expert on both platforms is no fun either.

But I do think the phone is the future application development platform.  Websites may evolve into mere brochure sites – providing info about your company and services, but not heavy functionality for existing customers.  Stuff like checking order status and billing may still live on the web for light customer service, but heavy application development should be on the mobile device, where native access to GPS, email, the camera, etc. is available.  Interestingly, in a previous post I mentioned I got a Wink Hub, and the only way to control it is via a smartphone app – there is no website application or login where it can be managed.  Perhaps the future is just starting to arrive.

Content creation should still be a big market – but not static website content; rather, custom, relevant content pushed to your customer.  Information pertinent to your customer should be selectively pushed to their inbox or smartphone – special offers, account notifications and the like.  So creating content uniquely for each customer, based on what you know about them, seems like the big technology winner.  To engage customers, the fusion of marketing and technology will need to be stronger than ever before.  Thinking about this reminds me of the launch of Internet Explorer 4 (circa 1997), where the big talking point was push technology that could deliver web pages to your PC.  The vision was fatally flawed, as web pages were just beginning to be dynamic, and frankly nobody really wanted all that content stored on their PC.  Content delivery has to be smart and targeted.

So to summarize: web applications are not the big growth industry going forward, and smartphone application development is still too expensive for many small businesses.  While we are in this technological transition, it is probably a good time to build up push capabilities – make your existing applications smarter about what each customer or lead is interested in, and push relevant content to them.  This investment will pay off regardless of what platforms emerge in the future.

Lots to think about in the coming year.   Regardless of what new technologies or trends emerge, as always there will be a lot of new things to learn and decisions to be made.


Dan on January 18th, 2017

I have had a chance to tally up the results from my second year of solar panel power production.  When I purchased the solar panels in 2015, I had estimated a six-year payback based on estimated production and on the subsidies paid on power generated.  For 2016, I generated 3.45 megawatt-hours, which was down 13% from 2015:

Solar Power Generated 1/1/15 – 12/31/16

I am assuming the two primary factors in the production drop-off are that 2015 was an exceedingly sunny year that is hard to match, and the gradual drop-off in the efficiency of the panels over time.  I will be surprised if following years show the same level of drop-off.  Interestingly, this year showed better production numbers for the last five months of the year vs. 2015, which gives me hope that it’s not a panel issue.

In my payback analysis I had budgeted $2,098 of revenue a year, and my check for 2016 came in at $1,853.  So early on, it looks like six years might be optimistic.  The other danger to the six-year payback is the possible lowering of incentives on home power generation.  The money available always seems to be at risk, and it appears possible that incentives will be less than originally estimated.  House Bill 1048 was introduced to finalize the incentives (see the bill summary here), so a lot will depend on whether it passes this year.  Interestingly, this bill does include incentives past 2020, which I did not include in my payback analysis, so that is a plus.
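
For the record, here is the rough payback arithmetic.  I haven’t stated the system cost here, so this sketch simply backs it out of the original estimate (six years at the budgeted $2,098 a year) – treat it as an illustration, not the actual numbers from my analysis:

```javascript
// Back-of-the-envelope payback, assuming the system cost implied by the
// original six-year estimate. Illustrative numbers only.
var budgetedAnnual = 2098;                       // original revenue estimate
var actualAnnual = 1853;                         // 2016 incentive check
var impliedCost = 6 * budgetedAnnual;            // 12588
var revisedPayback = impliedCost / actualAnnual; // about 6.8 years
```

At the 2016 rate the payback drifts toward seven years, which is why six years is starting to look optimistic.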

The maintenance of the panels has been a pleasant surprise: an occasional hosing-off from ground level, and once or twice a year I get up on the roof and wash and squeegee them.  In the spring the pollen was pretty dense on the panels, and I did notice an increase in production after a cleaning.

On a related power note, while I was out checking meters, I found my usage was 8.4 megawatt-hours, up 4% for the year.  This surprised me a bit, since in 2015 we ran more air conditioning, and in 2016 I replaced a number of lights with LEDs.  I will attribute the increase to the additional time I spent working from home in 2016, leaving more lights and heat on (maybe the refrigerator door being opened more often?).  Anyway, I am targeting a reduction in usage for 2017.

In summary, I am still pleased with the investment, and am now shooting for a 7-8 year payback, which still isn’t too bad.  Solar panel prices continue to drop, and by the time I get around to needing to replace my roof, solar shingles look like they might be ready.  So I plan to be in the power business for many years to come.

A followup to my previous post on the programming churn.  In my many years of programming, the thing I find amusing is how the industry has gone back and forth between centralized and distributed computing.  Talk about churn – architectures get re-envisioned with new languages, and the old becomes new again.

For example, when I first started coding, the mainframe was king – central processing with multiple dumb terminals.  COBOL, Assembler, and CICS were the tools of the day.  In the mid-1980s, ‘client server’ computing was hailed as the new world order, taking advantage of the power of the PCs that were starting to appear in businesses.  One problem: the infrastructure of the 1980s made it complex (and slow) for two machines to talk to each other – networking was in its infancy, and cross-company communication was via modems and phone lines.  In the early 1990s, client server started to mature with software like FoxPro, Microsoft Access and Visual Basic, as well as better PC-based databases.

Then in the mid-1990s the Internet came along, and from the late 1990s until the early 2010s the pendulum swung back to the server model – all the processing was done on the server, and dumb HTML pages were served up to the browser.  The model was eerily similar to mainframes.  New client server development pretty much died as browser-based apps became the new normal.

Did I say client server died?  Wait – like a zombie, it is back, this time in the form of JavaScript, JavaScript libraries, and JavaScript frameworks.  JavaScript running in the browser has evolved from simple form validation into a full-blown programming platform.  First jQuery started the flood of code onto the browser; now frameworks like AngularJS and React have pretty much brought us the return of fat-client computing.

In the 80s, infrastructure was the bane of client server – this time around, it’s the language.  By most accounts JavaScript is a terrible language that was pressed into service because it was the only cross-platform language available.  So to get around the ugliness of JavaScript, it has been somewhat abstracted away by languages and libraries that generate JavaScript.  In some ways it’s a convoluted mess (this is a great post discussing the current challenges..).  The current tooling is painful too – the whole front-end world seems disjointed.

Let’s not even get into the current horror story of building a phone app: you have to build the app twice (with very little code sharing) – once for Android, once for iOS.

So will client server win out?  I think in a couple of years the current hot mess will be cleaned up as the languages and tooling improve.  One complicating factor will be the rise of the Internet of Things, which will bring forth a zillion tiny devices that can talk to each other.  Yes, your washer can talk to your dryer, your lights can talk to your doorbell, and so on.  This is going to drive a whole new architecture of applications.  These devices will be pretty dumb – though some may be considered servers, since all they supply is data (lightbulbs, switches), and some may be considered clients, because they have a user interface (thermostats, garage door openers).  This is going to blur the lines between what is a client and what is a server.

So maybe the whole concept of servers and clients will disappear, finally putting an end to the client server debate.  One thing is for sure, though – there will be new languages and software architectures to learn, keeping the programming churn alive and well.