
Friday, January 17, 2020

When good, clean software architecture may not be the best design for the moment


In a galaxy not too far away...Imagine our fine products
Imagine we have two similar products. Let's call them Old Product and New Product, both maintained by the same set of teams.
New Product relies on libraries from Old Product: it has been mostly a new interface, relying on all of Old Product's libraries to act as its core libraries.

The dependency is a little painful, but all the core libraries are shared, so they are at least maintained in one place.

Build it well and they will come?
Let's say the company wants to push New Product and get customers off of Old Product. The company decides it wants to differentiate the new product more, build significant new functionality, and make sure the architecture is clean: free of old code, no feature flags, unencumbered by the old code.

To facilitate the new architecture, a decision is made to split the two products. They were already each in their own source control, where New Product had a reference to Old Product's libraries to use as its core.

Sharing is not caring?
A new source control repository is created for New Product, and the source code from Old Product is copied into it. Where New Product used to have library references for its core, it now has the full buildable source, so that it can diverge from the old code.

Legacy product changes? Why would we need those?
The initial plan is to issue changes to both source controls whenever the once-shared core libraries need modification, for at least 6 months, and then review (though the plan is that most changes would only go to New Product's source control). The concern, obviously, is that large divergence in the core libraries means increased merge pain and more significant testing. But if most changes only go to New Product, it's not a big deal.

All seems good in the world, except for one thing: more than 90% of the customers still use Old Product. A tool was built to upgrade from Old Product to New Product, and no customers are interested, due to concerns about feature differences in New Product.

Oh wait, we do need legacy changes for a little while?
Next, let's imagine the news comes that nearly all changes for the next six months will impact both products and both source controls, meaning nearly all work will be duplicated or merged across them. (Basically, bugs need to be fixed in the core libraries, which affect both products; both will be supported, so fixes must land in both.)

OK, much uglier, but still only 6 months...

Ummm... OK, legacy changes for a "little while" = years?!
Of course, only about a month later the news comes that most changes for the entire year will need to go to both products. (Many folks figure it may be years before customers upgrade, and support for both products will remain for some time.)

But how can good architecture be bad for now?
In our scenario, it seems the products would have been better off staying the way they were for a while, with New Product relying on Old Product's core libraries as a reference (not a copy), until there was at least a well-defined (and realistic) time frame limiting how long the manual or source-control-driven code copying, merging, and dual testing would be needed.

Of course, then the architecture wouldn't be clean and free of flags, since the core libraries would remain shared. But significant duplicated effort would be prevented, affecting the work of perhaps dozens or more staff. Feature flags, dependency injection, and hooks/events could have allowed the core libraries to remain mostly unchanged while still providing all of the functionality New Product needed, minimizing the work effort.
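As a minimal sketch of that idea (in Python, with invented names like CoreExporter; none of this is the actual products' code), a shared core library can expose flags and extension points so each product configures the same code instead of forking it:

```python
# Hypothetical sketch: one shared core library configured per product.
# All names (FeatureFlags, ExportHooks, CoreExporter) are invented.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class FeatureFlags:
    """Per-product flags instead of a forked copy of the core."""
    new_export_format: bool = False

@dataclass
class ExportHooks:
    """Hooks/events let a product extend behavior without editing core code."""
    before_export: List[Callable[[dict], None]] = field(default_factory=list)

class CoreExporter:
    """A 'core library' class; each product injects its own flags and hooks."""

    def __init__(self, flags: FeatureFlags, hooks: ExportHooks):
        self.flags = flags    # dependency injection of configuration
        self.hooks = hooks    # dependency injection of extension points

    def export(self, record: dict) -> str:
        for hook in self.hooks.before_export:
            hook(record)      # e.g. New Product enriches the record here
        if self.flags.new_export_format:
            return f"v2:{sorted(record.items())}"  # New Product code path
        return f"v1:{record}"                      # Old Product code path

# Old Product: defaults, no hooks; behavior is unchanged.
old = CoreExporter(FeatureFlags(), ExportHooks())

# New Product: flips a flag and registers a hook; the core stays shared.
hooks = ExportHooks(before_export=[lambda r: r.setdefault("source", "new")])
new = CoreExporter(FeatureFlags(new_export_format=True), hooks)

print(old.export({"id": 1}))  # v1:{'id': 1}
print(new.export({"id": 1}))  # v2:[('id', 1), ('source', 'new')]
```

With that shape, most changes land once in the shared core, and the duplication described below never has to start.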

This shows there are a lot of things that can affect architecture decisions and source control plans, and that sometimes the ideal architecture is not the best choice for the moment.

Considering the business objectives can be key, but they must be viewed within the full context of what your company is trying to do and the reality of the situation, not what the business hopes to accomplish. The real questions are: what can realistically be accomplished, and what are the costs of the change we are making?

What were the possible Cons of the decision:
In this example, the cost was that multiple development teams would have their progress slowed, initially by maybe only 10-25%, but over time possibly doubling the time for core library bug fixes. Since those fixes were going to be most of the changes for a year or more, that could be a pretty high cost to incur. Think about what it would mean if your company could only produce half the features it used to in a year: that is lost opportunity, and it costs up to twice as much to implement the same functionality.

What's even worse: as the code diverges, it becomes harder and harder to revert and change your mind if the plan does not go as expected.


What were the possible Pros of the decision:
The possibility of a clean architecture unencumbered by the old code. Possibly faster delivery of new features, since old code could be removed. And maybe customers would want to upgrade, if the features built were good enough and big enough.


What was the likely reality?
Both pros and cons... but remember, we said 90% of the customers were still on Old Product, so 90% of the revenue was coming from Old Product customers. Those customers wanted at least equivalent features before upgrading, but the features were not there and would likely diverge further.
Developers would have to spend extra time developing the same fixes twice, and that was most of the development work.

What, you didn't want to work two jobs and produce half the work?
So time was cut from new features for New Product, which were only 10% of the work effort, while the other 90% of the work had its effort increased by roughly 10-90% over time to duplicate the fixes into both products.
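To make that arithmetic concrete, here is a tiny back-of-envelope model in Python using the rough percentages above (illustrative assumptions from this post, not measured data):

```python
# Back-of-envelope cost model using this post's rough numbers (assumptions).
# 90% of the work hits the shared core and must be duplicated into both
# repositories; 10% is New Product-only feature work.

core_share = 0.90      # fraction of work that must land in both products
feature_share = 0.10   # fraction that is New Product-only

for duplication_overhead in (0.10, 0.50, 0.90):
    # Effort multiplier relative to a single shared codebase.
    multiplier = feature_share * 1.0 + core_share * (1.0 + duplication_overhead)
    print(f"{duplication_overhead:.0%} overhead on core work -> "
          f"{multiplier:.2f}x total effort ({multiplier - 1:.0%} more)")

# 10% overhead -> 1.09x total effort (9% more)
# 50% overhead -> 1.45x total effort (45% more)
# 90% overhead -> 1.81x total effort (81% more), approaching "twice the cost"
```

Even the low end is a tax on every team touching the core; the high end approaches the "twice the cost" scenario above.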

I'll put double my money on/in the red...
Does that seem like money well spent? If your existing customers would only upgrade for new features, perhaps. But if your customers wanted equivalent features and stable code first, and new features only after that was solved, then spending up to double the money to build fixes sounds problematic.

Don Quixote, it's a windmill!
In this example, the company wanted customers to move to New Product, wanted to deprecate Old Product, and wanted to make sure New Product had a great architecture. All very reasonable objectives. But the architects and groups deciding the split either decided in a vacuum or made unrealistic estimates about being able to avoid significant fixes to Old Product, because they misjudged how likely customers were to upgrade.

Not sure who said "build it and they will come"... but companies would likely be much better off analyzing, and being realistic about, where their money comes from and how realistic their plans are for customers to upgrade or move to other products, before investing time and money in their software plans.

This plan could have worked if timed right, so that customers really would all have upgraded within 6 months. Perhaps waiting until the data clearly showed that was likely to happen would have made the cost of this decision much easier to justify.


Alternative examples:
Now, of course, there could be reasons to do dual code changes (though seemingly not in this scenario). Imagine both products were going to have their own life, with no upgrade path, and we wanted to accelerate both of them: split the source controls, copy the code, and hire new staff so the team could separate into Old Product and New Product teams, allowing clean acceleration and direction for both products.
(That could be a valid reason to go that path.)

If, as the original scenario assumed, minimal changes would be made to Old Product, things could be fine, using dual code changes only as needed, perhaps for emergency hotfixes.

Even if every change needed dual code changes, had they been limited to 6 months or so they could probably have been managed without too much difficulty, knowing there would be light at the end of the 6-month tunnel.

So there clearly are scenarios where the original plan could have worked, had the right conditions existed.








Sunday, October 27, 2019

What it was like to black-box reverse engineer Windows on the WABI (Windows on Unix) project at Unix System Labs.


A copy of the manual is here.

This is my best recollection of the WABI project; since it was over 20 years ago, it's as close as I can recall.

Unix System Labs was a great group of engineers, passionate about software engineering, and they definitely inspired me as a developer in the early part of my career. Even though Unix at the time didn't run on all hardware, if you brought in your own PC and tried to get Unix running on it, everyone was super helpful. There was an email group you could send install issues to, and someone would come running down to your office to help you troubleshoot. The offices were shared two-person offices, which was way better than my previous consulting gig of cubes and flat desks in an open-office style.
Our team was part of the desktop group, working as part of a consortium of companies on the WABI project. Sun Microsystems seemed to be the leader, developing the core functionality to take Windows binaries and actually run them on Unix. Our team was to develop the Windows 3.1 apps you would normally get with a basic install: things like the Write application, Program Manager, the common dialogs DLL, and later the Control Panel. WABI wasn't supposed to stand for anything at all, but internally it was known to sort of mean Windows Application Binary Interface.

The computer setup we had was interesting: we had two computers each, a Windows computer to build our applications on and to see how Windows really behaved (I think this was also the common setup for email, which used something like GroupWise on Windows), plus a Unix computer for testing. We had a small team lab of maybe 10-20 PCs, a larger shared group lab with 2 to 3 times as many machines as our team's lab, and a language lab with machines running different languages. We used X Windows to work remotely from a machine, even back then, which was rather new to someone who had only worked on Windows at the time (of course you could also use a terminal or telnet for remote access). For Windows machines we used VNC, which I believe was invented at AT&T, though I'm not sure I realized that at the time.

One of the apps we worked on was the program manager application (we called ours Application Manager). It had been mostly implemented by another engineer but needed refinement to match the Windows version. Some of the work was getting Dynamic Data Exchange (DDE) to work properly so that installing new apps made the right files and folders appear. That took a large amount of testing of app installations. We were targeting the top 50 or more Windows applications of the time to run.
Our way of reverse engineering was only allowed to use publicly accessible information. No decompiling. So we had to use public API docs, published books, or visible API responses to inputs and outputs. We would write test apps to call the APIs, try them, and see what they returned. If a commercial app failed to work, we would debug it, figure out why, and test the APIs to see if they behaved differently from the public info. We found lots of behaviors that didn't match the documented specs. We got to speak to some authors of Windows internals books to clarify what they wrote, which was kind of cool. We would then mimic the actual API behavior or interface as best we could. One example: Program Manager had a feature where you could use the keyboard to move the cursor between apps within folders even when they weren't organized in rows and columns; hitting an arrow key would jump to seemingly the closest file in whatever direction the arrow was pressed. So another engineer and I worked on an algorithm to mimic that behavior as closely as we could, visually comparing the behavior under various arrangements of files until our algorithm operated as close as we could get to the Program Manager behavior.
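As a rough reconstruction of that kind of algorithm (my own sketch in Python, not the original code), the core idea is to pick the icon nearest to the current one that lies in the arrow's direction:

```python
# Hypothetical reconstruction of "jump to the nearest icon in the arrow's
# direction". The original was tuned by visual comparison against Program
# Manager, and its exact heuristic may well have differed.
import math

DIRECTIONS = {"left": (-1, 0), "right": (1, 0), "up": (0, -1), "down": (0, 1)}

def next_icon(current, icons, arrow):
    """Return the icon position nearest to `current` in the arrow's direction."""
    dx, dy = DIRECTIONS[arrow]
    best, best_dist = None, math.inf
    for (x, y) in icons:
        vx, vy = x - current[0], y - current[1]
        # Skip icons with no component in the pressed direction.
        if vx * dx + vy * dy <= 0:
            continue
        # Plain straight-line distance; the real algorithm likely also
        # penalized candidates sitting far off the arrow's axis.
        dist = math.hypot(vx, vy)
        if dist < best_dist:
            best, best_dist = (x, y), dist
    return best

# Icons not organized in rows/columns: pressing "left" from (100, 100)
# jumps to (40, 110), the closest icon that is at all to the left.
print(next_icon((100, 100), [(40, 110), (10, 100), (150, 100)], "left"))
```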
One day we got a call to help look at the Control Panel app being worked on by Sun Microsystems. They had generated the app using some sort of UI-to-code tool, and the code was so obtuse as to be nearly unmaintainable: a monolith with all functionality built in. We re-implemented it, simulating the public protocol for control panel applets (plugin DLLs), so the shell became not much more than a loader of the DLLs (CPL files), with the functionality broken up into the DLL applets. Sun needed it done quickly for some reason, and we threw most of the framework together over a weekend.
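The resulting shape, sketched very loosely in Python (the real shell was native code loading .cpl DLLs through the documented applet protocol; the module layout and get_applets() convention below are invented for illustration), is a thin host that only discovers plugins and delegates to them:

```python
# Thin-shell-plus-plugins sketch (hypothetical; the real Control Panel
# loaded native .cpl DLLs, not Python modules).
import importlib
import pkgutil

def load_applets(package_name="applets"):
    """Discover plugin modules in a package and collect their applets."""
    package = importlib.import_module(package_name)
    applets = []
    for info in pkgutil.iter_modules(package.__path__):
        module = importlib.import_module(f"{package_name}.{info.name}")
        # Convention: each plugin exposes get_applets() returning
        # (label, callable) pairs; the shell knows nothing else about them.
        applets.extend(module.get_applets())
    return applets

def run_shell():
    """The whole shell: list the applets, then delegate to one."""
    applets = load_applets()
    for i, (label, _) in enumerate(applets):
        print(f"{i}: {label}")
    choice = int(input("Open applet: "))
    applets[choice][1]()  # all real functionality lives in the plugin
```

Keeping the shell that small is what makes it possible to throw the framework together quickly: every applet owns its own functionality.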
We spent long stretches testing installs, the common dialogs, and the Write application so everything worked just like it did in Windows 3.1. Then we would spend weeks code reviewing with the team so we could be ISO 9000 compliant. We did formal written code walkthroughs: we would print out the code and hand out packets to all the reviewers, sometimes days beforehand, to allow preparation and comments on the paper. Then the original coder would walk the reviewers through the code, and the reviewers would point out issues or ask questions. We were not allowed to exceed 500 non-commented lines in about a 2 ½ hour period (which usually wasn't a problem, since the reviews were quite involved). There were times we spent almost 2 months just doing code reviews.
We spent many, many months getting everything working, and then we heard Sun gave in: users would be required to install Windows and hold a Windows license in order to use WABI. Some agreement with Microsoft, we presumed, to prevent a lawsuit. So effectively almost all of the apps our team had worked on would be overwritten with the real things when Windows was installed onto the Unix system (except, I think, for the Control Panel, since that had to control some Unix-specific functionality too).

There was a kind of funny story: one of the managers had gone on a plane trip, I think to California. On the plane he decided to change seats, since his seat was all the way in the back. Later he heard that if he hadn't changed his seat, he would have been sitting right next to Bill Gates. We always wondered what kind of conversations might have taken place if he hadn't moved.
There were folks at Unix System Labs we worked with who had worked with Kernighan and Ritchie. As a software engineer, it was pretty cool to hear the various engineering stories about things that made computer history.
Unix System Labs was bought by Novell.  The WABI project started to die down a bit.  We were asked to work on NetWare. 
At some point we were asked about building 32-bit Windows-on-Unix functionality for some secret project Ray Noorda was funding, so some of our team went out to Palo Alto to meet with them. We had started building some of the 32-bit Windows core functionality on our team as a proof of concept and met up to discuss plans.
We didn’t end up working with the team in California much after that though.
Eventually Novell laid off all the former Unix System Labs employees at our whole site, but got three companies to pick up the employees they wanted: BEA Systems, SCO, and Hewlett-Packard. We all worked in the same building for a while. Hewlett-Packard was so nervous about the other companies on the same floors that they set up guards and motion alarms in the hallways; you had to tap an electronic fob and run down the hallway to avoid setting one off. It was definitely an experience.

Working on Unix with those folks was a great engineering experience, and I'm glad I got the opportunity with such a great group of engineers; seeing how an engineering group is run definitely shaped my engineering career going forward. Given they also sold Unix source code, every line of code was scrutinized in code reviews to make sure it was up to the quality they wanted to produce.

Also, one of the things I took away from that team was not only the passion for software but the passion to promote your fellow team members' work. It was common for team members to take an interest in others' work and promote it to other teams, managers, and so on. It might just be something like, "Hey, did you see what Joe was working on? He got a new thread management system working that saves 20% on CPU cycles." Everyone was all about helping each other succeed. So not only was it a great product and project to work on, but a great team and environment as well.










Wednesday, October 23, 2019

Samsung Galaxy S10 Plus quick review


So far I'm really liking the Samsung Galaxy S10 Plus. I'm coming from a Droid Turbo 2 that had great battery life; with the S10 Plus and its battery-saving options optimized, I'm getting battery life equivalent to the Droid Turbo 2 or better.

Samsung One UI
Seems reasonably fast and does what I need. Takes a little getting used to, but seems fine.

Biometrics
I've only used the fingerprint scanner, which often doesn't let me in. See this article for some help improving the accuracy. I didn't use face unlock: given its 2D nature, chances are it's not a secure option, or at least not as secure as the fingerprint should be. That said, there was a report of any fingerprint unlocking an S10 device with certain screen protectors.

Display
Beautiful and bright; probably one of the best displays, if not the best, out at the time of this writing.

Battery
Amazing battery life. Tuned, I take my phone off the charger at about 6am, and when it goes back on the charger at midnight I usually have 70-75% battery left after about 2 hours of screen-on time. I'm using night mode with a dark wallpaper and battery optimizations enabled.

Size
It's a little big, but manageable for my small hands.

Performance
Operations and games seem plenty responsive and fast. I've only tried a couple of games, but they all seemed to run well.

Phone calls
Voice calls seem clear and loud enough. It might be a tiny bit softer than I'm used to, but it seems fine.

The camera
The front selfie camera cutout is a little weird, but in most apps it's not distracting since it sits where the status/notification bar is. The cameras take great pics. Zoom and wide-angle give you great options to fit more in your pics or get closer. The colors are often vibrant. In lower light the pics aren't quite as good as well-lit outdoor ones, but still usually good.


Pros

Great battery life
Great screen
Fast
Should get Android 10 soon
Lots of features/configurations and options
Lots of accessories available for it
Nice cameras/options



Cons
Kind of big
Worried about it breaking easily (it hasn't yet, and it seems well built... but it's glass on front and back)
Fingerprint scanner isn't very accurate: it often doesn't work, and there are reports of any fingerprint working under certain conditions

One of the coolest things you might not be using on your Samsung Galaxy S10 (or even earlier phones): Secure Folder


Secure Folder is also referred to as Knox. It's a secure container for running Android apps, and it lets you separate work apps from your home apps on your personal phone. If your office is like mine, they want you to install an app on your phone that lets them see all of your data and wipe it if you lose your phone, to protect their data. With Secure Folder, if you put that app and the other apps you need for work in the secure folder, their apps can only see and delete the files inside it.

This is a great way to separate home and work.

Secure Folder can be used for another purpose, too: sandboxing applications you want to run but that ask for ridiculous permissions they shouldn't need, like a calculator app asking for access to your contacts and storage files. If you only run that app in your Secure Folder, there are no contacts for it to find or files for it to see, so granting the access doesn't cause any problems.





Tuesday, October 22, 2019

Samsung Galaxy S10 and S10 Plus fingerprint accuracy problems: improvements and fixes


Well, you've probably heard the stories in the news that the S10 will let anyone in with any fingerprint and the right screen protector. Mine, however, doesn't like to let me in much at all.

I did, however, find ways to improve the accuracy:

1. When you add the fingerprint, notice exactly where you are pressing your thumb and try to center your thumb in the same place each time. When doing it, press like you really would when trying to access your phone. If you normally unlock one-handed, hold your phone exactly the way you would normally pick it up and press your thumb that way.

2. Add multiple fingerprints of the same finger/thumb (I think it allows 4). This allows for more samples. Do them the same way as mentioned above.

3. When trying to access your phone, position your thumb centered precisely where you placed it when you added your fingerprints. Really try to get it in a similar location.

I've noticed finger positioning and multiple fingerprints can help significantly.

Some people have noted that turning on touch sensitivity in Settings -> Display -> Touch Sensitivity helped them; for me it didn't seem to help.

This was on Android 9.


Update: 10/25/19
Just got the security update that fixes the fingerprint reader issue with certain types of screen protector. It mentioned redoing your fingerprints if you have that type of protector; since I don't, I just installed it and kept going. It actually seems to make the recognition significantly better. I've only been using it for a few hours, but it seems improved to me.

Update 11/1/19: The fingerprint is sometimes a little better with the update, but still not very reliable. Careful finger positioning helps a lot, but not all the time.


Update 11/15/19: The original screen protector I had on got lifted up and bent. I replaced it with another plastic screen protector, and that seems to have actually improved the fingerprint recognition a little. It's still hit or miss, but with the new protector and good thumb positioning it's better than before.

Monday, May 20, 2019

Watch shows like it was 1996! - You might want to stop binge watching anime now!

I know that in the age of Hulu, Netflix, Crunchyroll, Funimation, and vrv.co, you can power through an anime series in a few nights. Binge watching can be great for catching up on a show or avoiding the wait for the next episode. (Netflix, after all, was only founded in 1997.)

Here's the problem... if you binge watch a show, it can often take a year or even two before more episodes come out, and if you wait for the dub, that could be another year or more. So in the meantime you have to live perhaps several years without a new fix of your favorite show.

As I finished a good show, I would find myself wishing there were more new episodes to watch with my favorite characters.

So I decided to take a lesson from the old days... instead, you could view shows how they were originally intended to be shown: one episode, once per week. Yes, it's old-fashioned and slow, but if you would like to savor a show and shorten the wait between finishing it and getting new episodes (if it's still airing), then going back to network style may help you.

I've set up my own show schedule that I stick to. Since anime episodes are often short, around 24 minutes, I can sometimes fit 2, 3, or 4 different shows in a night. So now getting through 12 episodes of a show takes 12 weeks instead of a few days.

I didn't know if I could stick with a schedule, so I put it into a spreadsheet... surprisingly, it's been pretty easy to stick to, and I've been happier not finishing my favorite shows so quickly. I look forward to the scheduled day for each of my favorites... I never thought I'd be wishing for the old days of network-style TV scheduling, but I guess it happened.

Hope you enjoy your shows!