Computer Science - Is Object-Oriented Programming a Mistake?

Edited by Stephen Walsh, Thursday, 23 Feb 2023, 13:28

Object-Oriented Programming, also known as OOP, is the king of programming paradigms. For nearly three decades it has been perched upon the throne without a serious challenger in sight. That’s not to say everything has always been peachy. You don’t get to soar to these heights without attracting critics, and recently one such critic wrote that OOP is the worst thing to happen in computer science.

While some programmers said this was nothing but clickbait by an overzealous programmer, others saw it as an ominous sign that the reign of OOP is coming to an end.

So what are the origins of OOP and why is there such a love/hate relationship with it? To discover these answers, we must go back in time.

The 1960s were a time of great political and social change. While the world grappled with war, political assassinations and civil rights, computer scientists were grappling with the problem of structure and scale. Back then programs were written as linear batches that could run on for thousands of lines. With no way to control execution other than the infamous goto statement, the resulting spaghetti code was difficult to read and almost impossible to debug.

This quandary frustrated the heck out of these scientists, so it was decided something had to be done. Eventually a concept was devised whereby related data and methods were grouped into separate units called classes. This idea was later enhanced by Alan Kay, who coined the phrase Object-Oriented Programming. He also went on to define features that would become staples of OOP, such as objects, inheritance, and encapsulation.
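If those three terms are new to you, here is a minimal sketch of all of them at once, written in modern Java for familiarity; the Account and SavingsAccount classes are names I’ve invented purely for illustration.

```java
// Encapsulation: balance is private, so it can only be changed
// through methods that enforce the rules.
class Account {
    private double balance;

    public void deposit(double amount) {
        if (amount > 0) {
            balance += amount;
        }
    }

    public double getBalance() {
        return balance;
    }
}

// Inheritance: SavingsAccount reuses everything Account provides
// and adds behaviour of its own.
class SavingsAccount extends Account {
    public void addInterest(double rate) {
        deposit(getBalance() * rate);
    }
}

public class Demo {
    public static void main(String[] args) {
        // Objects: each instance bundles its own state with behaviour.
        SavingsAccount savings = new SavingsAccount();
        savings.deposit(100.0);
        savings.addInterest(0.05);
        System.out.println(savings.getBalance()); // prints 105.0
    }
}
```

Notice how the caller never touches the balance directly; that separation between what an object does and how it does it is the heart of the idea.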

But the world was not quite ready for this new style of coding. At the time computers were still the stuff of sci-fi, and programming was a mere theoretical topic bandied about by researchers and academics. It would take another decade or more before OOP would make the leap into the cutthroat field of commercial development.

Fast forward to the 1980s and computer-mania has gripped the world. Demand for more complex software is growing, along with the need for better development practices to handle the workload. Two of the most popular languages of the era were Objective-C and C++. Both helped companies like Apple and Microsoft create sophisticated and reliable applications and operating systems. What made these languages so powerful was their flexibility, along with their ability to harness all the principles of OOP design. For Alan Kay and the other computer scientists, this was validation of their hard work beyond anything they could have imagined.

There was more good news to come. A few years later another language entered the race: Java. In many ways this would be a game changer. With its easy-to-learn syntax and built-in memory management, Java became a popular choice with both software houses and up-and-coming developers. It was also regarded as one of the first modern object-oriented languages. As a result, if you wanted to learn Java you first had to get a handle on OOP principles.

To keep up and compete with the growing popularity of Java and OOP, other languages followed suit. Soon the market was awash with languages such as C#, Scala, Swift, and Ruby, all dynamic and flexible languages with native support for object-oriented programming.

So where does that leave us today? Simply put, OOP is so prevalent in our applications, websites, and operating systems that there is no escaping it. It has become the de facto solution to all our software design problems. There are no discussions about paradigm suitability anymore; the decision is already made: OOP. And that decision isn’t made out of appropriateness, it’s made because objects and encapsulation are so entrenched in the psyche of developers that they wouldn’t even dream of considering other options.

The problem here is obvious, and I can’t believe it needs to be stated, but here goes: OOP may not be the right fit for every project. In fact, on some projects it can become a huge distraction. Developers get drawn into rabbit holes trying to work out what should and shouldn’t be an object. That is valuable brain power wasted on “abstraction” instead of on tackling the real problem.
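To make that concrete, here is a deliberately contrived sketch of the kind of ceremony I mean; all of the names (TaxStrategy, StandardTax, InvoiceCalculator) are invented for this example, and the whole “design” exists to hide one line of arithmetic.

```java
// Abstraction for its own sake: one expression buried under three types.
interface TaxStrategy {
    double apply(double amount);
}

class StandardTax implements TaxStrategy {
    private final double rate;

    StandardTax(double rate) {
        this.rate = rate;
    }

    @Override
    public double apply(double amount) {
        return amount * (1 + rate);
    }
}

class InvoiceCalculator {
    private final TaxStrategy strategy;

    InvoiceCalculator(TaxStrategy strategy) {
        this.strategy = strategy;
    }

    double total(double net) {
        return strategy.apply(net);
    }
}

public class Ceremony {
    public static void main(String[] args) {
        // The object-oriented route: three types and two objects later...
        double oop = new InvoiceCalculator(new StandardTax(0.2)).total(100.0);

        // ...versus the whole problem solved directly:
        double direct = 100.0 * 1.2;

        System.out.println(oop);    // 120.0
        System.out.println(direct); // 120.0
    }
}
```

If the tax rules never vary, every one of those classes is pure overhead, and the time spent naming and arranging them is time not spent on the actual problem.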

That said, I don’t think OOP is the biggest mistake in computer science. Sure, if used incorrectly it can severely slow things down. But that’s the fault of developers and system architects who shoehorn OOP into projects where it doesn’t belong. The object-oriented approach still has a lot to offer, but it’s not a silver bullet. What we really need is to teach developers to be more flexible and more willing to take risks rather than just following the prevailing wind. Just because Object-Oriented Programming is a sturdy hammer doesn’t mean every problem is a nail.

