Computer Science - Code Entropy

Edited by Stephen Walsh, Wednesday, 12 Jul 2023, 08:21

When software doesn’t evolve or move forward, it is said to be suffering from a curious ailment known as software entropy, or code rot. As the name suggests, code rot is the slow deterioration of an application due to a lack of updates or advancements. It’s a phenomenon that exists in many organisations, but it is most rampant in banks, airlines and insurance companies, the type of companies, ironically, that claim to be on the cutting edge. Don’t be fooled. At this very moment transactions worth billions of dollars are being processed on antique applications that were written in the days before man walked on the moon. But in a world where technology is moving at breakneck speed, how did this happen? The answer lies in the past and is a curious mix of laziness, bad practices, and a lack of vision.

In the 60s and 70s programming was a very different occupation from the one we know today. It required not only mental agility but physical agility too. The leading languages of the time only accepted instructions typed onto punch cards that were then manually fed to the computer. Depending on the complexity of the program there could be anything from 500 to over 1,000 cards. This required programmers to lug heavy boxes from one terminal to the next whenever they needed the machine to perform a particular function.

Another annoying quirk was that these cards had to be inserted into the terminals in a particular sequence or nothing would happen. This nugget of knowledge was especially useful to the office jokers who liked to give the cards a quick shuffle whenever the programmers weren’t looking.

This likely sounds like a nightmare to the modern user or developer. The phrase “more trouble than it’s worth” springs to mind. But these terminals had advantages. They were particularly efficient at retrieving information and performing complicated calculations and could perform these tasks much faster than humans.

While plenty of businesses were calling out for such functionality, it was the banks, airlines and insurance institutions that had the cash to adopt this new technology. The result was companies like J.P. Morgan and Bank of America scooping up as many programmers and technicians as they could find to implement custom programs to improve the productivity of their business.

Thankfully, over the next few years programming evolved and became much less cumbersome. Advancements in storage and memory meant source code could now be stored and executed from disk rather than lugged around in heavy boxes. This breakthrough was gladly received by programmers and their strained backs. But the situation was still a long way from perfect.

COBOL, the most popular computer language of the time, was known to be sturdy and reliable but also needlessly verbose, especially by today’s standards. A simple program to add two numbers together could take 15 to 20 lines. This might not sound like a big deal; however, for complex applications the code could run on for a million lines or more.
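
For contrast, here is a rough sketch of the same add-two-numbers program in Java, a language that turns up later in this blog. The class name and values are invented for the example, and even with Java’s own boilerplate the whole thing fits in a handful of lines, with no separate divisions to declare before the arithmetic can happen.

    // A minimal, purely illustrative Java program that adds two numbers.
    public class AddTwoNumbers {
        public static void main(String[] args) {
            int a = 2;
            int b = 3;
            System.out.println(a + b); // prints 5
        }
    }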

Partially due to their size, but also due to the lack of decent software practices, testing and maintaining these types of applications was difficult and tedious. At the time, the emphasis was on getting the job done quickly rather than writing beautiful, well-structured code. This behaviour gave rise to sloppily written, inefficient procedures and subroutines. Multiply these bad habits over 10 or more years and the application begins to resemble a jungle of impenetrable code: a dark and dangerous place that can only be navigated by a select group of people familiar with the terrain.

All these factors likely weighed on any decision about migrating these dinosaur applications to modern platforms. While everyone agreed something needed to be done, serious questions remained about the cost and the hassle any action would bring. Programmers didn’t relish the idea of rewriting a codebase of such magnitude, and management certainly didn’t relish the idea of writing massive cheques for the IT department. As always, the result was to kick the can further down the road and hope money and resources would present themselves in the future. This type of wishful thinking continued for the best part of 30 years. In the meantime, the world moved on: the computer age became the internet age, then the internet age became the mobile age. Unfortunately, no magical money tree sprouted to save the day, and nothing ever changed.

This scenario wasn’t unique. It was replicated in thousands of businesses across the globe. High-profile companies like United Airlines and J.P. Morgan, even the Pentagon, to this day still have legacy COBOL systems controlling large parts of their operations. These applications still chug along on relic mainframes, but they are not updated, merely maintained. Down through the years, all attempts to upgrade have ended in failure.

Now, the original developers, the founding fathers if you will, have long since retired, taking with them years of knowledge and experience. This has dampened hopes of meaningful migrations ever taking place. Their replacements, new-age developers schooled in modern languages like Java and C++, aren’t much help. They look upon the ancient syntax of COBOL like tourists staring at hieroglyphics on a wall in Egypt. The best they can do is conceal the problem from the outside world, which they do by creating fancy web and mobile frontends to mask the festering applications beneath. No doubt there are still regular meetings about attempting another upgrade, the result likely being to kick the can a little further down the road. Next year, or maybe the year after, something will finally be done to eliminate the rot.
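
That masking is usually a thin facade: a modern service accepts a web or mobile request, translates it into whatever the mainframe understands, and translates the answer back. Below is a rough Java sketch of the idea; LegacyMainframeGateway, the fixed-width message format and the account-balance scenario are all hypothetical stand-ins for whatever adapter a real institution would use.

    // Hypothetical facade hiding a COBOL/mainframe backend behind a modern interface.
    // LegacyMainframeGateway stands in for a real adapter (message queue, screen-scraper, etc.).
    interface LegacyMainframeGateway {
        String send(String fixedWidthRequest); // returns a fixed-width response record
    }

    class AccountBalanceFacade {
        private final LegacyMainframeGateway gateway;

        AccountBalanceFacade(LegacyMainframeGateway gateway) {
            this.gateway = gateway;
        }

        // Looks friendly to the web frontend; the mess stays behind the gateway.
        long balanceInCents(String accountNumber) {
            // Pad the account number into the fixed-width layout the old system expects.
            String request = String.format("BAL %-12s", accountNumber);
            String response = gateway.send(request);
            // The legacy record carries the balance as a zero-padded numeric field.
            return Long.parseLong(response.trim());
        }
    }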


Computer Science - Data Modelling Thoughts

Edited by Stephen Walsh, Friday, 12 May 2023, 13:20

When it comes to data modelling there are two camps at the forefront: those who consider it a skill and those who consider it a dedicated role. In this debate there are no real winners, since the answer lies firmly in the long grass of “it depends”. Let me illustrate the point with a story.

A few years ago, my father built a shed in the back garden. With dogged determination he spent weeks laying down brick, mixing cement, and sawing planks of wood in half. To everyone’s surprise he managed to complete it without seriously harming himself. Even more surprising, the resulting structure stood, and still stands today, albeit with crooked walls and a leaky roof. Not bad for a man with no building experience and tools that consisted of a measuring tape and a rusty saw.

The point of this long-winded narrative is that my father knew next to nothing about architecture but knew enough to get the job over the line. Believe it or not, database projects in the real world are often run in a similar manner. Data modelling is rarely considered a full-time role; instead, the responsibility is often dished out to existing personnel, usually application developers. For small to medium-sized projects this is not a problem. Most developers will have data modelling in their tool-belt and enough knowledge to do a competent job. Any inconsistencies and inefficiencies at this scale can be easily papered over.

Things get messy, however, when the project grows in complexity. Just as my father’s shed-building ability doesn’t equip him with the skills needed to tackle a skyscraper, a developer might not have a sufficient handle on things to take on an enterprise-sized data modelling job. For large projects, security, storage, and reporting all need to be taken into consideration, and this might be a bridge too far for developers. It is at this critical juncture that data modelling crosses the threshold from useful skill to full-time role.

Data modelling, for those who are unaware, is the process of designing, building, and documenting the database architecture of a company. It involves mapping business objects and behaviours to their relevant database counterparts. This is a hugely important step in the development process. Data modellers are essentially the bridge between the business requirements and the technical implementation, and as such need to be fluent in multiple database systems as well as able to interpret business functions at a minute level. Those able to walk this tightrope between the two sides are a rare breed in the IT world.
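
As a toy illustration of that mapping, the sketch below takes an invented Customer object and notes, in the comments, the relational table a modeller might derive from it. The class, columns and constraints are all made up for the example; a real modeller would also be worrying about keys, indexes and relationships across dozens of such objects.

    // Toy example of object-to-table mapping. In a relational database this class
    // might become something like:
    //   CREATE TABLE customer (
    //       id        BIGINT PRIMARY KEY,
    //       full_name VARCHAR(100) NOT NULL,
    //       email     VARCHAR(255) UNIQUE
    //   );
    public class Customer {
        private final long id;         // maps to the primary key column
        private final String fullName; // maps to full_name
        private final String email;    // maps to email, with a uniqueness constraint

        public Customer(long id, String fullName, String email) {
            this.id = id;
            this.fullName = fullName;
            this.email = email;
        }

        public long getId() { return id; }
        public String getFullName() { return fullName; }
        public String getEmail() { return email; }
    }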

In recent years the job has become more and more relevant, thanks to the rise of data science and machine learning. Across the globe, businesses are becoming increasingly reliant on data for their decision making and for making predictions about the future. This has led to a need for clean and efficient data that is optimised for reporting and analysis. Expect to see plenty of data modelling job vacancies over the coming years.


Computer Science - Is Object-Oriented Programming a Mistake?

Edited by Stephen Walsh, Thursday, 23 Feb 2023, 13:28

Object-Oriented Programming, also known as OOP, is the king of programming paradigms. For nearly three decades it has been perched upon the throne without a serious challenger in sight. That’s not to say everything has always been peachy. You don’t get to soar to these heights without critics, and recently one such critic wrote that OOP is the worst thing to happen to computer science.

While some programmers said this was nothing but clickbait by an overzealous programmer, others saw it as an ominous sign that the reign of OOP is coming to an end.

So what are the origins of OOP and why is there such a love/hate relationship with it? To discover these answers, we must go back in time.

The 1960s were a time of great political and social change. While the world grappled with war, political assassinations and civil rights, computer scientists were grappling with the problems of structure and scale. Back then programs were written in linear batches that could run on for thousands of lines. With no way to control execution other than the infamous goto statement, the resulting spaghetti code was difficult to read and almost impossible to debug.

This quandary frustrated the heck out of these scientists, so it was decided something had to be done. Eventually a concept was devised whereby related data and methods were grouped into separate units called classes. This idea was later enhanced by Alan Kay, who coined the phrase Object-Oriented Programming. He also went on to define features that would become staples of OOP, such as objects, inheritance, and encapsulation.
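
To make those terms concrete, here is a small sketch in a modern language, Java; the Account and SavingsAccount names are invented for illustration. The class bundles data and behaviour into one unit, the private field is encapsulation at work, an object is simply an instance of a class, and SavingsAccount demonstrates inheritance.

    // A class bundles related data and behaviour into one unit.
    class Account {
        private long balanceInCents; // encapsulated: only reachable through methods

        void deposit(long cents) {
            if (cents > 0) {
                balanceInCents += cents;
            }
        }

        long balance() {
            return balanceInCents;
        }
    }

    // Inheritance: a SavingsAccount is an Account with extra behaviour.
    class SavingsAccount extends Account {
        void addMonthlyInterest(double rate) {
            deposit(Math.round(balance() * rate));
        }
    }

    public class Demo {
        public static void main(String[] args) {
            SavingsAccount savings = new SavingsAccount(); // an object: an instance of a class
            savings.deposit(10_000);
            savings.addMonthlyInterest(0.01);
            System.out.println(savings.balance()); // prints 10100
        }
    }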

But the world was not quite ready for this new style of coding. At the time computers were still the stuff of sci-fi, and programming was a mere theoretical topic bandied about by researchers and academics. It would take another decade or more before OOP would make the leap into the cutthroat field of competitive development.

Fast forward to the 1980s and computer-mania has gripped the world. Demand for more complex software is growing, along with the need for better development practices to handle the workload. Two of the most popular languages of the era were Objective-C and C++. Both helped companies like Apple and Microsoft create sophisticated and reliable applications and operating systems. What made these languages so powerful was their flexibility, along with their ability to harness all the principles of OOP design. For Alan Kay and the other computer scientists, this was validation of their hard work beyond all imagination.

There was more good news to come. A few years later another language entered the race: Java. In many ways this would be a game changer. With its easy-to-learn syntax and built-in memory management, Java became a popular choice with both software houses and up-and-coming developers. It was also regarded as one of the first modern object-oriented languages. As a result, if you wanted to learn Java you first had to get a handle on OOP principles.

To keep up and compete with the growing popularity of Java and OOP, other languages followed suit. Soon the market was awash with languages such as C#, Scala, Swift, and Ruby, all dynamic and flexible languages with native support for object-oriented programming.

So where does that leave us today? Simply put, OOP is so prevalent in our applications, websites, and operating systems that there is no escaping it. It has become the de facto solution to all our software design problems. There are no discussions anymore about paradigm suitability; the decision is already made: OOP. And this decision isn’t made out of appropriateness, it’s made because objects and encapsulation are so entrenched in the psyche of developers that they wouldn’t even dream of considering other options.

The problem here is obvious, and I can’t believe it needs to be stated, but here goes: OOP may not be the right fit for every project. In fact, in some projects it can become a huge distraction. Developers can be drawn into rabbit holes where they try to work out what should and shouldn’t be an object. This is valuable brain power wasted on “abstraction” instead of tackling the real problem.

That said, I don’t think OOP is the biggest mistake in computer science. Sure, if used incorrectly it can severely slow things down. But that’s the fault of developers and system architects who shoehorn OOP into projects where it doesn’t belong. The object-oriented approach still has a lot to offer, but it’s not a silver bullet. What we really need is to teach developers to be more flexible and more willing to take risks rather than just following the prevailing wind. Just because Object-Oriented Programming is a sturdy hammer doesn’t mean every problem is a nail.

