Engineering: The unified OOP paradigm

❝Resolving conflicting, divergent notions of OOP with a unified OOP paradigm❞

Disclaimer: I mention a few names of other people, but please understand that this is my interpretation of their work and of the various concepts involved. Furthermore, I am taking quite a leap in this attempt to unify different notions. I might be wrong.

I believe that we cannot master OOP programming languages until we comprehend the (design) capabilities of OOP.

TL;DR Three distinct OOPs, The unified OOP paradigm, Foundation, Conclusions.

In previous blog posts, a series on object-oriented programming, we started exploring object-oriented programming after encountering not only the educational side of OOP, but also the practical/professional side, and, more importantly, the conflicting definitions and confusion. On the internet you can find conflicts about virtually anything, including a multitude of posts on what OOP “really is” (or is meant to be) and which languages enable “real OOP programming”.

I mentioned the multitude of posts on OOP: what it is, and arguments for and against it. It would be convenient to say that only one camp is right and everyone else is wrong, but by the end of this post it will become clear that everyone is right in some way, and that to resolve this we need to align our understandings.


“In science and philosophy, a paradigm (/ˈpærədaɪm/) is a distinct set of concepts or thought patterns, including theories, research methods, postulates, and standards for what constitutes legitimate contributions to a field.” – Wikipedia: Paradigm

Object-Oriented Programming (OOP) is called a paradigm. So, what ideas does OOP actually offer? Virtually everything you see of OOP is in structure. But there is more to it. Structure is of no benefit on its own. Structure is a way, or rather, an attempt at managing complexity. There are discussions on what OOP actually is, how it is meant to be used, how this supposed structure should be applied and what its results should look like.

We previously looked at a proposal for applying the three dimensions of simplicity in programming – rules and guidelines for implementations that satisfy OOP. In this post we will look at clarifying OOP itself and unifying three existing (distinct) notions of “OOP the paradigm”. The clearest example of these distinct notions of OOP is the set of fundamentally different programming languages that all claim to follow the OOP paradigm.

The OOP paradigm is about structure rather than implementation details. It is about design rather than implementation. This post will focus on design and we will look at how we can unify the different notions of OOP by mapping them onto the dimensions of simplicity (or, if you will, complexity). I will attempt to demonstrate how different notions truly may be part of the same paradigm, but with their primary focus on a different dimension. It is important to realize that some basic programming language aspects and practices are out-of-scope, as is procedural programming, which is focused on implementation as well.

Why is this important?

We have been working with OOP for the last few decades, yet we do not fully understand what it means. We have a vast service-industry based on software for which we have not quite figured out how to maintain control. We have many ideas, guidelines and best-practices, most of which are not fully applicable or cannot deliver the envisioned result because the foundation is missing, or rather the foundation is subjective.

OOP right now comes in roughly three different flavors, of which developers will typically only know one. There are many impressive and innovative programming languages that are composed of ideas and mechanisms from other languages and theory from all three flavors of OOP and even other paradigms. Yet, we cannot align the programming languages, paradigms, or ideas.


Setting the direction

First I want to set the direction for this post. This post explores some strong statements that I think have some basis of truth. Or, rather, will define some shared foundation between all of these, seemingly distinct, notions of OOP.

I will do this using a definition of simplicity, as previously defined. In this post I will assume that the definition is true, even though I use this post partly to explore. As I write this, and while exploring the topic, I am aware that I may be forcing the subject and/or the result. Part of the exploration is to find out whether it is even possible to make sense of it. I also assume that my general understanding of the various OOP flavors is correct, although it will certainly be lacking in subtle (implementation) details.

We will attempt to align the various flavors of OOP. The focus is on design aspects, and we will need to distinguish between (theoretical) principles of OOP and mechanisms that programming languages use to make things work.

Please note that even if you do not subscribe to this notion of simplicity/complexity, it does not matter. The definition is used as a guide. This works, because the same properties are already present. Many ideas are ways to manage software complexity. Similarly, I believe the same properties are prevalent in the various notions of OOP because, regardless of the conflicts in definitions, the underlying goal common to all of them is managing complexity.

Three distinct OOPs

Let’s look at the various notions of OOP. For a long time, I knew about two notions of OOP: one based on class-hierarchy and encapsulation of data, the other a system of communicating processes – or a system of living, communicating cells akin to those in biology, as described by Dr. Alan Kay. However, there are also occasional quotes about “everything is an object”, which is not necessarily captured by either of the two notions.

Everyone who has worked with OOP-based programming languages can tell you that these differences are not just a matter of choosing different words to describe the same thing; they are significantly different notions:

Objects providing encapsulation

Many programming languages follow this model to a certain extent: C++, C#, and Java, to name just a few. They all take roughly the same approach to the design-level structure of the application: a class captures a specific concept in code, as an independent unit. State/data and the corresponding logic are joined and maintained together. The rules that manage the concept of an individual class are fixed during design. The class can then be used repeatedly, without adding significantly to the state management of the application as a whole.

Classes can use other classes: through composition, as a basis for derivation, or by implementing an interface/trait. Anyone using the class may then treat it as this base class or interface as needed. These are all mechanisms that provide different ways of handling the combined data+logic unit, the class.

This is one form of OOP: the central principle is encapsulation, the way a concept, represented by state and logic, is separated into a reusable unit.

From now on, when referring to encapsulation we will mean the notion of OOP based (primarily) on encapsulation. Note that we say “primarily” because, for example, Java’s type-hierarchy is not directly relevant to encapsulation.
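A minimal sketch of this notion in Python (the Thermostat class and its range check are purely illustrative, not taken from any particular library):

```python
class Thermostat:
    """Encapsulates a thermostat's state plus the rules for changing it."""

    def __init__(self, target: float) -> None:
        self._target = target  # internal state, managed only by this class

    def set_target(self, target: float) -> None:
        # The invariant (a sane temperature range) is enforced in one place,
        # so no caller needs to know or re-check the rule.
        if not 5.0 <= target <= 30.0:
            raise ValueError("target out of range")
        self._target = target

    @property
    def target(self) -> float:
        return self._target


t = Thermostat(20.0)
t.set_target(22.5)
print(t.target)  # 22.5
```

Callers interact only through methods; the state and the logic that keeps it valid travel together as one unit.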

Objects, system of communicating processes

a.k.a. a system of interacting processes a.k.a. concurrency.

This notion of OOP is less strongly represented in industry; Erlang is a well-known example. The view here is that an object is an individual, independent, self-sufficient, “living”/self-executing piece of code. In technical terms, it needs to have its own control-flow. Note that this notion does not require everything to be executed simultaneously. It could be. However, when considering completely independent units, the way they execute is a minor detail, as your whole design is already aligned this way.

Dr. Alan Kay coined the term “Object-Oriented Programming”, envisioning a program as a system of small objects that interact the way (biological) living cells interact, by signaling each other. It is easy to recognize that this is a significantly different explanation from that of data and corresponding logic managed through encapsulation.

The central characteristic is concurrency, or concurrent design: the way a system is designed as independently functioning units that are expected to interact if and when necessary. Notice that these objects do not have a prescribed size of any kind. The objects can vary in size, and the object as described here does not need to correspond to the object from the encapsulation notion of OOP.

Now, consider this: Go, too, can take this approach. Go allows creating many goroutines, which are intended to have very minimal overhead: virtually free. During design, we can think of goroutines as simply a way of designing according to this notion, so we can design an application from the ground up as a system of (communicating) processes.
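Python has no goroutines, but the same design style can be approximated: independent units with their own control-flow that interact only by passing messages. A minimal sketch using a thread and queues (the worker and its doubling logic are illustrative):

```python
import queue
import threading


def worker(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """An independent unit: it runs on its own and reacts only to messages."""
    while True:
        msg = inbox.get()
        if msg is None:  # sentinel value signals shutdown
            break
        outbox.put(msg * 2)


inbox: queue.Queue = queue.Queue()
outbox: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(inbox, outbox))
t.start()

for n in (1, 2, 3):
    inbox.put(n)   # interaction happens only via the message queue
inbox.put(None)    # ask the unit to stop
t.join()

results = [outbox.get() for _ in range(3)]
print(results)  # [2, 4, 6]
```

The key point is the design, not the threading mechanism: neither side reaches into the other's state; they only signal each other.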

“Everything is an object”

This seems the most elusive of the three; maybe because it seems the most transparent?

Languages called “pure” OO languages, because everything in them is treated consistently as an object, from primitives such as characters and punctuation, all the way up to whole classes, prototypes, blocks, modules, etc. – Wikipedia: Object-oriented programming - OOP languages

“Everything is an object” means being able to treat everything the same way. I almost always encounter this in discussions of OOP, and therefore these things are always called objects, as the most general name one can give to an arbitrary thing. But then there is the question of whether the object, i.e. the composite data structure possibly with methods attached, is itself really relevant.

Smalltalk is a “pure” object-oriented programming language, meaning that, unlike C++ and Java, there is no difference between values which are objects and values which are primitive types. – Wikipedia: Smalltalk - object-oriented programming

Notice how Smalltalk’s Wikipedia page takes a different angle in its explanation? The Smalltalk page explains it as “every value is okay, either primitive or object”, while the OOP page says “we will just imagine everything is an object, regardless of whether it actually is”. This is just one example of the amount of confusion present. (One can argue that it is “just a Wikipedia page, and anyone can update it”. However, the issue is not the inconsistency itself, but rather that it is not identified as a problem.)

This characteristic, starting out by treating everything as equal, is the cornerstone of dynamic typing. Consider, though, that Python, for example, had dynamic typing before it had a notion of classes as used for encapsulation. Python later developed two variants of classes that differed mostly in their implementation; only as of Python 3 did it move to a single implementation. This shows how you can have dynamic, flexible typing behavior without encapsulation, classes/structures as types, or any kind of hierarchy.

Without explicit support for classes or type-hierarchy, there are still many “primitive” data-types to work with: boolean, char, byte, unsigned integers of various sizes, signed integers of various sizes, floating point numbers of various sizes, enumerations and unions, tuples and composite structures, and then both actual values and pointers or references to values.

Mapping OOP(s) onto dimensions of simplicity

The notions that we have just discussed are significantly different. They offer unique mechanisms for program design that even require a shift in mindset from the engineer. So the question is: are these approaches indeed completely unrelated? I had always thought they were; the mechanisms were simply distinct.

My current working theory is this:

“All of these notions of OOP have emerged from a common goal: managing software (design) complexity. This would mean that we can relate these notions to the definition of simplicity.”

I mentioned before that I am working with a definition of simplicity. As defined, it consists of three orthogonal dimensions: expansion/reduction, generalization/specialization, and optimization. Each dimension represents a different aspect of complexity. In the previous post, we used the definition of simplicity to define a set of rules that determine the structure and boundaries for a class, allowing us to evaluate what data and logic does and does not become part of the class. This allows us to create classes of manageable size while respecting the large majority of recommended guidelines and industry best-practices.

Now, we are on the topic of design, so the question becomes: “Can we relate the differences in OOP paradigms to the same definition of simplicity?” If so, what does that mean?

  1. there is a common foundation for seemingly very different approaches to OOP,
  2. we can clarify what these differences mean, fundamentally,
  3. we can resolve long-standing questions and confusion that emerge from conflating the OOP paradigms,
  4. we can qualify how programming languages rate per dimension,
    • allowing programming languages to be compared against the reader’s own subjective impression,
    • allowing the maturity/extensiveness of programming languages to be compared as a range in each dimension,
    • allowing programming languages to be compared with one another.

This assumes that the definition of simplicity is an acceptable working definition, as it is used as the point of convergence and the basis for this comparison. We use it (merely) as a guide to match against what we already know about OOP; even if the definition turns out to be unacceptable, that does not invalidate everything discovered here.

All OOP programming languages have their strengths and weaknesses. It seems likely that a language is dominant in one dimension, with the other two following. Programming languages have the added complication that the language as a whole needs to integrate the three dimensions into a single coherent solution, so design and implementation matter a lot. Sound, well-thought-out design choices result in fewer edge cases and a smoother integration overall. This, in turn, results in an easier-to-use language with fewer (circumstantial) details to memorize in order to be proficient.


Encapsulation is primarily about reducing complexity through code-structure. Encapsulation separates a piece of state+logic into a separate unit. This reduces the amount of interacting state in the overall program, because the way state interacts inside the encapsulated unit is predefined. Having more of these units means less global state and more state consolidated in small parts. This corresponds to the dimension expansion/reduction, where simplicity expects the achievable minimum. Therefore, the central notion of encapsulation-as-OOP achieves exactly this property.

We talked in more detail about encapsulation in the previous post, which went into what belongs inside and outside the class. It outlined a few rules that guide decision making. The main idea is to strictly enforce encapsulation, which benefits all circumstances: the class itself remains small and maintainable, unnecessary dependencies are reduced, everything pushed out is coaxed into a more appropriate place, and guidelines and best-practices become more meaningful and more easily and consistently realizable.

So why does this qualify as reduction? Consider if objects did not exist and all fields were managed as a whole. All state is thrown together, and therefore any part of the application needs to understand exactly which fields are allowed to change given any value of any other state. It must be aware of the whole graph of data-dependencies. Now, if we isolate 5 of those fields in a class, with the corresponding logic for managing its state, that logic is the only logic that needs full awareness of the dependencies between those fields. No other logic needs to be aware; the class guards against incorrect use. The class has collapsed the 5 × 4 × 3 × 2 × 1 = 120 possible orderings of changes to the data (assuming no revisits due to feedback or iterations) that might otherwise need to be considered. Any logic using the class only needs to know about the changes it needs to perform, by calling the appropriate method. Part of the complexity is consolidated in a smaller object. This significantly reduces the complexity of the overall application: both for a single use, because the knowledge is isolated, and in the ease with which the class can be reused and its state duplicated. It literally reduces the number of fields (“moving parts”) as well as the number of invariants (“rules”) to manage.
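A minimal sketch of this consolidation (the Order class and its fields are made up for illustration): two fields with a dependency between them, where only the class itself needs to know the rule.

```python
class Order:
    """Consolidates related fields so only this class knows their invariant."""

    def __init__(self) -> None:
        self._item_prices: list[int] = []  # prices in cents
        self._total = 0                    # invariant: equals sum(self._item_prices)

    def add_item(self, price_cents: int) -> None:
        # The dependency between the item list and the total is maintained
        # here, once, instead of by every piece of code touching either field.
        self._item_prices.append(price_cents)
        self._total += price_cents

    @property
    def total(self) -> int:
        return self._total


order = Order()
order.add_item(999)
order.add_item(501)
print(order.total)  # 1500
```

Code using `Order` never needs to know that the total must track the item list; that rule exists in exactly one place.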


Concurrency, the composition of concurrent (interacting) systems, maps to another dimension. This may be surprising, because it takes a slightly bigger leap. The idea behind concurrency is to model the real world as closely as possible: as a system of smaller, independent, interacting objects. (Let’s call them units for now, to avoid ambiguity with other “objects”.) However, the design is not the final step. Eventually, the system needs to be executed, and thus there are all kinds of timing aspects: units may or may not execute in parallel, may or may not have dependencies between them, and may or may not be forced to wait for outside responses, such as storage, networking, etc.

The communication aspect is partially solved through “interaction”, as long as interaction between processes suffices. Another aspect is scheduling: execution needs to be halted for units that are blocking/waiting, so that other units can run. These are consequences of hardware not being as limitless as the concurrent design assumes.

This notion of OOP maps very nicely onto optimization. To clarify: the concurrent design is the ideal of concurrency-based OOP, the unoptimized variant. It is unoptimized because the developer expresses the program as representative of the real world and has not made any decisions on order of execution or the interleaving of its dependencies. The order of execution for a concurrent design is non-deterministic; the exact execution is decided at run-time, often in the moment, given many circumstances. The optimized variant, on the other hand, is the design in which concurrency has been reduced or removed completely. We effectively descend a level of abstraction by concerning ourselves with a specific (linear) flow of execution. A linear program design contains (implicit) decisions on which parts of the logic should run in what order. These decisions might turn out differently if made at run-time, and may vary for every execution.

Many programming languages, such as C, C++, C#, and Java, have limited support for concurrency. Their primary “mode” of programming consists of writing code and calling functions/methods as appropriate. Therefore, programming is essentially prescribing the interleaving of concurrent units, for one or more processors specifically. An application written in such a programming language is a prematurely optimized version of the design, fine-tuned for execution. Instead of the original design of many interacting systems, we have (prematurely) collapsed it into a version that is more suitable for execution. In most cases, this premature optimization is biased towards very few processing units; for years, just one CPU. Now, given only the blueprints of a prematurely optimized design, we try to derive the original design of what was supposed to be a concurrent system. This is difficult, because our source information is riddled with undocumented assumptions, biases and decisions.

As you can now see, a concurrent design, i.e. an unoptimized design, is simple. This also demonstrates the difference between easy and simple: the simple system is the concurrently designed one. However, it is far easier, deliberately or accidentally, to design and implement software as a single control-flow, i.e. a single-threaded program.

Explained in terms of process models

Petri nets model state as a set of tokens present at different places in the model. Your current state is the combination of places holding a token. This is a concurrent design. Tokens can move independently as far as the model allows, so you may end up with many possible combinations of tokens, i.e. potentially very many possible states.

Finite State Machines (FSMs), on the other hand, model each possible state as an individual element. Therefore, the many combinations of token positions in a Petri net can expand massively when converted into an automaton. Whereas Petri nets are able to model concurrent systems and have the actual execution adjust to circumstances, e.g. the ordering and source of events, the collapsed version of the model, the finite state machine, will need (potentially) very many states in an attempt to provide the same flexibility. With (prematurely) optimized designs, you virtually never capture the same flexibility.
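The state-count difference can be sketched numerically (the components and their places are made up for illustration): a Petri-net-like model grows additively with independent components, while the equivalent flat state machine grows multiplicatively.

```python
from itertools import product

# Three independent components, each with its own places a token can occupy.
components = [
    {"idle", "busy"},                          # component A: 2 places
    {"empty", "filling", "full"},              # component B: 3 places
    {"closed", "opening", "open", "closing"},  # component C: 4 places
]

# A Petri-net-like model tracks one token per component: 2 + 3 + 4 places.
places = sum(len(c) for c in components)

# The equivalent flat state machine needs one state per *combination*.
fsm_states = list(product(*components))

print(places)           # 9
print(len(fsm_states))  # 24
```

With more components the gap widens quickly: ten two-place components need 20 places but 1024 FSM states.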

Levels of concurrency

There are multiple levels of concurrency and concurrent design, varying in the extent to which the concurrency is realizable.

  1. Concurrent design, user-space (pre-emptively) scheduled (N:M threading model)
    • light-weight threads (fibers, green threads, user-space threads, user-threads, goroutines, greenlets, etc.)
    • user-space scheduling to efficiently enable concurrency that is based on pre-emptive scheduling, as opposed to cooperative scheduling. This enables concurrency without explicitly yielding control.
    • as with OS-level scheduling, will adapt to statuses of user-space threads such as waiting/blocking.
  2. Concurrent design, linear execution, cooperatively scheduled
    • Co-routines, generators, continuations: concurrent design realized into an executable program, using constructs to yield execution to another co-routine. The yield mechanism is supported as a base operation either by the programming language itself or by a (standard) framework.
    • Yielding gives the developer the ability to pick the moment to activate another co-routine. This is a lighter version of fully schedulable and managed user-space threads from previous bullet.
    • The developer is in control and therefore needs to be considerate of everything, including the overhead of swapping co-routines. This works well for co-routines with a direct dependence, such as one co-routine producing values that the other consumes.
  3. Concurrent design, system threads only
    A concurrent design that cannot be executed as such. Instead it will be scheduled onto one or more system threads. Execution is less efficient due to the overhead and limits of system threads, and scheduling is performed by the operating system. You are strictly bound by limited resources: concurrent units that are blocked need to be moved aside so as not to exhaust the system threads. The idea of concurrent design is to assume no overhead, and this cannot be realized in execution.
  4. No concurrency, system-threads only
    Parallel execution: a linear design that has some places where many calculations, typically many iterations of the same logic, execute in parallel. The intention is to speed up execution.
  5. No concurrency, no threading, no parallel execution, single control-flow
    One big call-stack with all logic accessed through function-calling (subroutines).
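Level 2 above, a concurrent design realized through cooperative yielding, can be sketched with Python generators (the producer/consumer pair and the round-robin scheduler are illustrative, not a production pattern):

```python
def producer(buffer):
    # Appends three values, explicitly handing control back after each one.
    for n in range(3):
        buffer.append(n)
        yield

def consumer(buffer, out, expected=3):
    # Consumes until it has seen `expected` values, yielding in between.
    while len(out) < expected:
        if buffer:
            out.append(buffer.pop(0))
        yield

buffer, out = [], []
tasks = [producer(buffer), consumer(buffer, out)]

# A minimal round-robin scheduler: each task runs until its next yield.
while tasks:
    for task in list(tasks):
        try:
            next(task)
        except StopIteration:
            tasks.remove(task)  # the task's control-flow has finished

print(out)  # [0, 1, 2]
```

The `yield` statements are the developer-chosen switch points: the design is concurrent, but the developer decides exactly when control moves to another unit.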

As with optimization of implementations, we can optimize a design. We previously mentioned that we can reduce/remove concurrency without having a specific target in mind. However, we can also select an execution configuration as a target. It then becomes possible to reduce concurrency in such a way that the ideal characteristics emerge for exactly that execution configuration.

Let’s say there is an exact number of processors and limited available memory. You can use the number of processors to guide decisions on which parts need to run in parallel during execution, and which parts can be interleaved. Knowing your memory limits means you can decide to run memory-intensive tasks in lock-step, such that they process and hand over data in manageable chunks, to negate or bound the needs of memory-hungry tasks.

Explained by analogy

If you find it difficult to follow my reasoning about optimization, consider the following analogy, described in the previous post: writing code as normal for a particular programming language results in simple code: the code is unoptimized. Conversely, if you start writing in assembly and/or with low-level (e.g. bitwise) operations, you will need to consider all kinds of hardware-specific details in order to make it work. On top of that, you typically write this type of optimized code with a very specific target in mind, meaning you choose not to support many other potential targets/platforms.

When writing, for example, cryptographic routines in assembly code, we call it optimization because we hand-tune the code and perform measurements to make sure we have the very best result for that platform. Essentially we do the same when we design without concurrency in mind, except no-one is carefully comparing against the concurrent design, or hand-tuning for a particular number of CPUs, (process) scheduling characteristics, amount of memory, architecture, or the performance ranges of external devices, network, storage, etc. Yet a concurrent design would have given us this for free at run-time. However, as it is unoptimized, it might not be optimal for the intended use case; but it is also not bound by numerous assumptions about execution parameters.


We have discussed both encapsulation as the “object”-centered idea for reduction, and concurrency as the “object”-centered idea for optimization. So far, the ideas of OOP relate well to the dimensions of simplicity. However, there is one dimension left: generalization/specialization. It is reasonable to assume that this dimension is also represented in the OOP paradigm. Generalization itself is certainly already a known concept in OOP.

The details seem fuzzy, however. “Everything is an object” is one of those quotes that is mentioned in conjunction with OOP. But what does that actually mean?

You might argue that this is stretching the definition. However, note that so far we have managed to explain two notions based on orthogonal dimensions, i.e. dimensions with their own independent range of influence. This means that the third dimension, even if many OOP languages do not (yet?) offer its full range, may go to an extreme as well.

Statically-typed languages have the “everything is an object” characteristic to some strictly controlled extent: they start out most-specialized, with the exact type of the object. That is the strictest side of the spectrum. From that point on, there is usually a mechanism that allows weaker definitions to apply, to allow more general handling. However, these are often limited by static typing requirements.

A very large class of programming languages that we have only briefly touched upon is the dynamically-typed languages. The identifying characteristic of “dynamic typing” is that the logic you write does not enforce a particular type: starting out, everything is treated as equal. This is the opposite side of the spectrum.

Python is a prevalent example of this. Python, and of course other languages, start “enforcing” expectations, i.e. restricting the applicable types, based on whether or not a value can be used in the ways required by the logic. If certain fields are read or written, this implies that those fields must exist. If a certain method is called, this implies that this method must exist. This specific matching behavior is not present in all of these languages, but it is in Python at least. This matching behavior for types is called duck-typing; there are other ways of testing the applicability of a type.

Duck-typing: “If it walks like a duck, and it quacks like a duck, then it must be a duck.”

Go’s empty interface type (interface{}) matches any value, making it effectively a general “catch-all” type. Java’s type-hierarchy is rooted in the parent common to all objects, java.lang.Object. However, this does not cover primitive types and is therefore incomplete.
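Duck-typing as Python supports it can be sketched as follows (the classes are made up for illustration): two types with no shared hierarchy are both accepted, simply because each provides the method the logic requires.

```python
class Duck:
    def quack(self):
        return "Quack!"

class Person:
    # Unrelated to Duck in any type hierarchy, but has the same method.
    def quack(self):
        return "I'm quacking!"

def make_it_quack(thing):
    # No type is declared or enforced; the call succeeds for any
    # object that happens to provide a quack() method.
    return thing.quack()

print(make_it_quack(Duck()))    # Quack!
print(make_it_quack(Person()))  # I'm quacking!
```

Passing an object without `quack()` would fail with an AttributeError at the call site, at run-time, which is exactly the "enforce only what the logic requires" behavior described above.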

So it seems that the dimension of generalization/specialization is also present in OOP. Again, not in the classical sense of OOP that most people are familiar with, as was the case for optimization. However, again in a form that is already represented by major languages, and co-exists with the other notions of OOP.

Example: Python’s dynamic typing

Let’s take Python, for example. You can equally simply provide a function, a generator, a data value of various types, a class, etc. Python will discover your programming mistake when you use the object to call a method which does not or cannot exist. The example below demonstrates how a trivially defined function accepts a string, a function and a module with equal ease. The print function is sophisticated enough that it knows what to do when the input is a module. This is a deliberate effort, of course; it has to be implemented this way. However, it is supported nonetheless.

>>> def test(word):
...     print(word)
...
>>> test("hello")
hello
>>> test(test)
<function test at 0x7f85431318b0>
>>> import math
>>> test(math)
<module 'math' (built-in)>

Python treats every object the same initially, and has a framework for determining any type of characteristic, from indexing to decorators to function calling. It even handles dynamically-located attributes of any kind as you use the attribute.

Python is a clear example of “everything is an object” without requiring the “object” to be (an instance of) a class, while clearly demonstrating that this is very generalized. It defaults to the most general understanding, until specific actions are taken that require deeper inspection. Python takes the approach of “everything is possible, until proven otherwise”: you access arbitrary attributes, and it will discover whether or not the attribute is valid. Static typing, on the other hand, takes the approach of knowing beforehand, at compile-time, whether the attribute exists. If it does not exist at compile-time, this is considered an error and compilation fails.
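A small illustration of this “possible until proven otherwise” behavior (the Box class and its attributes are made up): attributes can be attached at run-time, and invalid access only surfaces at the point of use.

```python
class Box:
    pass

box = Box()
box.label = "tools"  # attributes can be attached at run-time

print(box.label)            # tools
print(hasattr(box, "lid"))  # False

# Access is attempted optimistically; failure surfaces at the point of
# use, as an AttributeError, rather than being rejected at compile-time.
try:
    box.lid
except AttributeError as exc:
    print(type(exc).__name__)  # AttributeError
```

A statically-typed language would reject both `box.label = ...` and `box.lid` during compilation, since `Box` declares neither attribute.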

Do competing ideas exist?

We have explored a mapping of exactly one idea onto each dimension. You might wonder if this seems too convenient. Given that each dimension is independent, there can be one idea per dimension without conflict: an idea that is sufficiently self-contained to fit within the dimension. Note that we are still considering ideas based in theory, so the burden of making them work in practice is not yet relevant.

“Can we also explain how there is at most one idea for each dimension? Obviously, one explanation is that I just missed all the others.” More likely, there is a natural tendency to converge on a single idea that satisfies all the concerns of that dimension. As mentioned before, the dimensions of complexity are orthogonal in nature. Multiple ideas for the same dimension would overlap and would therefore need to compete to remain viable. All would aim for the same target, that is, managing complexity for the one dimension. Regardless of which dimension it is, there is a common goal. Therefore, it is likely that many ideas working from the same basis end up converging.

“Are there no other ideas at all?” There are certainly other ideas, but they compete as a different paradigm altogether. Functional programming is such a paradigm. I have not evaluated functional programming to the same extent as I am now doing with the OOP paradigm. However, I do suspect that this would be equally viable. Functional programming uses different principles for handling complexity, such as functions being first-class citizens and purity/no side-effects. We would need to explore whether it is possible to relate these principles to simplicity/complexity.

The unified OOP paradigm

Having discussed these three variants of OOP, we have seen how different dimensions of simplicity/complexity are represented through notions of OOP. It is evident that many programming languages all roughly follow the same big ideas. However, we never managed to join these big ideas into a single paradigm.

By establishing the true principles of OOP, we make more apparent how programming languages make things work. The unified OOP paradigm finally allows us to explain Object-Oriented Programming in a shared, straightforward way that works for all such programming languages, and we can base our guidelines and best-practices on a paradigm that works for all languages. The only considerations, then, are whether you are limited by a general lack of support in a programming language, or by the subtleties of the various supporting mechanisms, rather than whether the programming language is “the correct flavor of OOP”.

The original intention of the OOP paradigm does not change. It is still a design paradigm. It is still used to manage complexity. Its “vehicle” to accomplish these goals is still the “object”. We have merely identified which are the true (i.e. more likely) pillars of the paradigm, and which are the supporting mechanisms – that are often specific to the language and its choices. In making this distinction, we have widened the paradigm to include all ideas for managing complexity. This is evident through the number of OOP programming languages that we can incorporate within the unified OOP paradigm. It contains a far wider, possibly complete, spectrum of design complexity.

The consequence of many years of multiple notions of OOP is that the terms used have become unclear and ambiguous. The term – but not necessarily the meaning – is shared among notions of OOP. The next sections discuss the three principles of the unified OOP paradigm. I have attempted to choose new terms or reuse terms to avoid further confusion.

Encapsulation (expansion/reduction)

The matter of “moving parts” and prescribed “rules”. Which data fields are relevant to a concept, both immutable and mutable, as well as all the rules concerned with maintaining consistent state. Abstract away all parts specific to the concept into an isolated unit: everything concerning both data and control. Afterwards, the object can be used without awareness or knowledge of its internals.

Encapsulation reduces the overall complexity of a design by isolating and consolidating individual concepts.

Adaptability (generalization/specialization)

The matter of specialization and any satisfiable generalization. Treat any object as its exact type or as any applicable generalization, as circumstances prefer. Programming languages default to one or the other: specialized for statically-typed languages, generalized for dynamically-typed languages. Statically-typed languages offer mechanisms to widen applicability, while dynamically-typed languages narrow it down.

Next to defaulting to specialization or generalization for values (variables, parameters, return values, etc.), this is also possible for types (parameter-types, return-types, types for fields, etc.). This is realized, for example, through parameterized types/generics. Again, there are multiple variations, with increasing levels of sophistication and scope.

note I chose the term ‘adaptability’ to avoid confusion with ‘polymorphism’, which is loaded with a narrower definition. The term is intended to express the ability to adapt to its context, i.e. its varying uses. This nicely reflects the benefits of polymorphism, generics, and also dynamic typing.

Concurrency (optimization)

The matter of designing as a system of interacting entities. Instead of designing a solution as a single, large whole, it is designed with the independence and self-sufficiency that each unit deserves. A system of interacting units is expressed as such in the design. The way the system is implemented and the parts responsible for execution determine how the program eventually executes.

Concurrent designs allow for more flexibility, dynamic adjustment to circumstances and non-determinism. Execution is different depending on many characteristics of the environment, such as the hardware platform, available resources, etc.

Concurrent designs should be considered unoptimized, as these still allow any variation of execution that satisfies the requirements of the design. At the opposite extreme, a concurrent design is translated into a single linear (single-threaded) application. We have then optimized for that specific execution configuration, taking all variation out of the equation. Concurrency is about concurrent design, irrespective of execution configuration.

Unified OOP: a convergence of ideas

It is important to realize that the unified OOP paradigm is a convergence of different notions of OOP. The unification does not raise the level of abstraction in itself, nor does it reword or reframe existing principles, nor is one notion of OOP preferred/prioritized over the others, nor are the original principles of the various OOP notions all treated in the same regard. Instead, it explores the many principles of the OOPs and puts them in their proper place.

The principles of the unified OOP paradigm, like the dimensions of simplicity, are orthogonal in nature. The orthogonality is important, because it grants programming languages the freedom to implement the principles in whichever way they see fit. This is evident from the vast number of programming languages and both their significant and subtle differences.

The principles of the unified OOP paradigm are no longer the areas where conflict occurs. Instead, all of the controversial, challenged properties are now (mere) supporting mechanisms. This makes sense, because different languages may choose different solutions. It is up to the programming language to compose their own set of ideas, and consequently the supporting mechanics.

The paradigm prescribes a set of ideas; the programming language converges these ideas, using a selection of supporting mechanisms, into a workable whole: syntax and semantics, but also edge cases, unsound or incomplete features, possibly conflicting mechanisms, inconsistent behavior, backwards-compatibility, compromises, etc.

Supporting mechanisms for OOP

Not all of the notions of OOP share the same identifying characteristics. For example, the OOP most well-known in industry claims three principles: encapsulation, polymorphism and inheritance. We saw in previous sections that different notions sport different strengths, and when we try to unify we need to pick the common principles. But what then is the role of the others? These mechanisms play the supporting role of integrating into the larger whole that is the programming language.

Supporting mechanisms (as I will call them here) are – often language-specific – mechanisms that help to realize one or more of the unified OOP principles. The supporting mechanisms are where the hurting starts, in a way. The core principles, as stated in the previous section, are pure ideas, each based on a single concept. These ideas can work independently, in theory at least. Things become complicated the moment these orthogonal ideas need to (seamlessly) integrate and work in unison. Not surprisingly, these mechanisms are also the ones that have alternatives. Depending on the programming language itself, one or another may be more suitable, or feel more natural.

Note that integration is often not seamless. Java has iterated over its generics capabilities with major version increments of the language. Go is just now introducing generics but starting out – for the initial version – supporting function-based generics only. Type-based generics, and whichever other improvements and enhancements are possible, will likely be added later.

2022-06-13 Correction: the statement on Go’s generics support (previously) was wrong: it stated that Go restricted itself to type-based generics, however it is reverse: Go has started out supporting function-based generics only, with type-based generics likely to follow later.

It is also important to consider how many intricate details are necessary to make integration work. The more complicated it is to integrate the two, the harder it becomes to make it work seamlessly for all cases without much need for edge cases and special consideration.

Following are a number of subsections for such supporting mechanisms. This is a listing and may not be interesting to dig through (on a first read). Feel free to skip these subsections and continue with the next section.

The list is not exhaustive or intended to be. Instead, it works both to show the separation between the unified OOP principles (the ideas) and their supporting mechanisms (implementation building blocks), and to give you an impression of the richness of mechanisms, alternatives, in use by various programming languages.

note a reminder, as much to myself as to the reader, that this post investigates the design (complexity) aspects of OOP. Therefore supporting mechanisms are scoped to that topic. Some language aspects such as strong-/weak-typing are irrelevant as these do not contribute to design so much as to implementation.

Data-hiding, access-control vs convention (encapsulation)

Encapsulation is an effective idea that is supported by the programming language down to the syntax level. A well-designed class prevents you from touching attributes that you should not touch. This helps to enforce proper structure that is consistent throughout the application. Not only have you decided that the class manages certain fields, you also express that no-one else should touch these fields, such that a class can protect its invariants.

Data is hidden within the class. Access control enforces proper use. Even more, data is hidden to the extent that users do not need to understand what data is there or how it is modified. The composition of data may even change as long as the use of the class does not change. This creates a significant boundary, a boundary that disconnects two regions of code, as promised by encapsulation. On the other hand, encapsulation can work even if data is not protected. Merely managing data separately is sufficient for its benefits. Therefore, merely agreeing not to touch “privileged” data is enough to make encapsulation work by convention.

Python supports encapsulation through classes. However, unlike many other programming languages, Python does not prevent access. Instead, there is a convention among Python programmers that private access is indicated by prefixing the field or method with “_”. Similarly, logic for overloaded operators is placed in prescribed methods with a similar naming convention: __add__(self, value) for the +-operator. Data is hidden in the class, but not protected. Proper use (restraint) is an expectation.
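A small sketch of both conventions, using a hypothetical Money class:

```python
class Money:
    def __init__(self, cents):
        self._cents = cents  # "_" prefix: private by convention, not enforced

    def __add__(self, other):
        # prescribed method backing the +-operator
        return Money(self._cents + other._cents)

    def cents(self):
        return self._cents

a = Money(150)
b = Money(250)
total = a + b    # dispatches to Money.__add__
leak = a._cents  # nothing actually prevents this access; restraint is expected
```

The underscore signals intent to the reader; the interpreter enforces nothing.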

Class-based vs prototype-based (encapsulation)

Another mechanism that is specific to the programming language is the way these encapsulating objects are duplicated for reuse/repeated use. The most prevalent mechanism is that of a class, which serves as a blueprint: every instance starts anew from the same memory layout and the same “construction recipe”, i.e. a constructor with its initialization-routine. Another mechanism that accomplishes roughly the same is the prototype-based approach. Prototype-based programming takes the approach of duplicating an existing object, copying the thing as a whole, as it currently exists. Again, these mechanisms compete to be the solution for creating the encapsulating structure.

Properties (encapsulation)

Properties provide access to a field as if accessing the field directly. A property can be implemented to have some backing logic, but behaves like a field: used through reading and assignment. They are essentially method calls shaped as field accesses. There are plenty of programming languages that support properties, such as C#, Pascal/Delphi and Python.
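A brief sketch of a property in Python, using a hypothetical Celsius class:

```python
class Celsius:
    def __init__(self, degrees):
        self._degrees = degrees

    @property
    def fahrenheit(self):
        # backing logic, yet used as if it were a plain field
        return self._degrees * 9 / 5 + 32

    @fahrenheit.setter
    def fahrenheit(self, value):
        self._degrees = (value - 32) * 5 / 9

t = Celsius(100)
reading = t.fahrenheit  # read like a field; actually a method call
t.fahrenheit = 32       # assignment also routes through a method
```

The class may later change its internal representation without affecting any user of `fahrenheit`.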

Inheritance, type-hierarchy (adaptability)

We have not yet discussed inheritance in much detail. Inheritance was not one of the three principles of the unified paradigm, even though it is always widely discussed. In my current understanding, it is a supporting mechanism: some “glue” that connects a type to a mechanism enabling polymorphic capabilities. Inheritance does so by introducing a hierarchy of types/classes, where each step up the hierarchy means interpreting the object as a more general type.

Multiple inheritance is a powerful but tricky mechanism that is supported in C++ and with slightly different semantics in Python. Java never touched multiple inheritance for its complications with overlapping definitions/implementations.
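A small sketch of Python's semantics, using hypothetical classes, where the method resolution order (MRO) settles overlapping definitions:

```python
class Logger:
    def describe(self):
        return "logger"

class Cache:
    def describe(self):
        return "cache"

class CachedLogger(Logger, Cache):
    # both bases define describe(); the MRO decides which one wins
    pass

obj = CachedLogger()
winner = obj.describe()  # Logger comes first in the MRO
order = [c.__name__ for c in CachedLogger.__mro__]
```

Java avoids exactly this kind of ambiguity by disallowing multiple inheritance of implementations.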

Interfaces and traits are a simpler mechanism that enable polymorphic possibilities, while leaving the implementation to the class itself. Implementing multiple interfaces or traits may have the risk of overlapping methods with conflicting signatures, but that’s the extent of it. The supporting mechanism is slimmer, therefore there is less risk/difficulty.

Dynamic dispatch (adaptability)

Dynamic dispatch is the mechanism that determines at run-time which method body needs to be executed for a given method call. The core problem being solved here is that, given a type-hierarchy, methods may be overridden. This means that for any method call, one method body out of a number of different method bodies needs to be executed. From the particular type, which may be the exact type or some more general type, it must be determined which method body actually applies.

Going into some technical details: there is some accounting involved in determining the appropriate body for each method. Consider what happens when calling a method: it is not sufficient to jump to a method body. First you determine which method body you need to jump to; then you jump. This is not immediately relevant here, but it may illustrate what the abstraction accomplishes and also what it introduces: method-calls need a look-up before they are executable. It should give some impression of how the type-hierarchy influences the mechanical process of execution.

There are roughly three levels of dynamic dispatch:

  1. the immediate (static or resolved) target of a concrete implementation: no actual dispatching is needed for this case.
  2. polymorphic mechanisms require looking up the right target: dispatched at run-time. (For example, because of the influence of a type-hierarchy.)
    note the Wikipedia-page splits this level of dynamic dispatch into a lighter version in use in C++ and a heavier version that uses “fat pointers” as available in Rust and Go.
  3. dynamic dispatching always required: due to a different syntax or extended language features, there is always a look-up phase involved. Custom logic allows deviating from standard behavior, such as for operator overloading, dynamic behavior for all or just unknown attributes, redirection of calls, or ability to programmatically fail call-execution.

The last variant is the most elaborate and is used by, among others, Python and Objective-C. This is sometimes called message-passing, for its extensive, customizable dispatching. Specifically, it is called “message-passing” because an internal method is called with the name of the intended method – or operator – passed in as a parameter. Providing the method name as a parameter provides more flexibility than using the method-calling syntax. (And lacks syntactic safety.) The language requires an additional level of indirection and resolves it using internal logic – and possibly alternative syntax.
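A minimal sketch of this look-up phase in Python, using a hypothetical Proxy class whose __getattr__ receives the method name as a parameter:

```python
class Service:
    def ping(self):
        return "pong"

class Proxy:
    def __init__(self, target):
        self._target = target

    def __getattr__(self, name):
        # the method name arrives as a plain parameter: the "message"
        if hasattr(self._target, name):
            return getattr(self._target, name)  # redirect the call
        raise AttributeError(f"refusing message {name!r}")  # fail call-execution

proxy = Proxy(Service())
result = proxy.ping()  # looked up, redirected and executed at call-time
```

Custom logic in `__getattr__` can redirect, refuse, or synthesize behavior for any name it is handed.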

This is fundamentally different from the notion of message-passing as present in concurrency. This variant is basically “advanced dynamic dispatching”, and behaves like a method-call.

note I originally made the error of thinking that, because of the presence of methods, encapsulation would be required. However, considering the required behavior carefully, realize that methods must be present, but not necessarily on a class or any composite type. Having a type-hierarchy – or any notion of generalization, really – is a sufficient requirement for the need to look up the relevant method body.

Structural/nominal/duck typing (adaptability)

The type-system is used – among other ways – to determine when a type is sufficiently similar that a piece of logic is able to use it. This mechanism caters to generalization/specialization. The way this similarity is determined supports the ability to generalize, i.e. to use data based on anything other than the exact type.

There are three well-known variants:

  1. nominal typing: a type matches when the expected type is explicitly named, i.e. declared as implemented or extended;
  2. structural typing: a type matches when its structure (fields, method signatures) satisfies the expectations;
  3. duck typing: a type matches when, at run-time, the attributes actually used turn out to be present.

This type identifying/matching mechanism is supporting to the extent that it is literally the enabling mechanism for using encapsulating types (objects) in a generalized manner. This is a nice example that demonstrates both that alternatives exist for these supporting mechanisms, and how these support mechanisms become the link between unified OOP principles.
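As an illustration of structural typing, a minimal sketch using Python's typing.Protocol with a hypothetical Greeter protocol:

```python
from typing import Protocol, runtime_checkable

@runtime_checkable
class Greeter(Protocol):
    def greet(self) -> str: ...

class English:
    # English never names Greeter; its structure alone makes it match
    def greet(self) -> str:
        return "hello"

e = English()
matches = isinstance(e, Greeter)  # structural check, not a name check
```

A nominal type-system would require `class English(Greeter)` to establish the same relationship.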

Parametric types, parametric polymorphism, generics (adaptability)

Polymorphism makes it possible to provide any type that satisfies the signature of the expected parameter-type. This makes it possible to request the most-general type for input, making the function most broadly applicable.

When composite types are involved, one would not only want to specify the type of an instance, but also which type(s) are used within its logic, for example when used internally or stored. In addition to setting a minimum bound for what to expect as input, you also specify an exact or minimum bound for the type it can handle itself. The most familiar use cases are the container types such as lists, sets, collections, maps/dictionaries. In order to make types themselves “configurable”, there are parametric types, a.k.a. generics. These are especially useful for composite types containing methods, because they make it possible to share parametric type information between all necessary methods. Merely repeating parametric type specifications is not sufficient, because the type system must be able to ensure that in all cases instances of the exact same type are expected, as opposed to other types, or the same type but unverifiable.
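A minimal sketch of a parametric container type in Python, using a hypothetical Stack:

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Stack(Generic[T]):
    def __init__(self):
        self._items: list[T] = []

    def push(self, item: T) -> None:
        self._items.append(item)

    def pop(self) -> T:
        return self._items.pop()

s: Stack[int] = Stack()  # the type parameter fixes T for all methods at once
s.push(1)
s.push(2)
top = s.pop()  # a checker knows this is an int, not merely "some object"
```

The shared `T` across `push` and `pop` is exactly the "shared parametric type information between all necessary methods" described above.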

There are multiple levels of parametric type, varying in sophistication, and multiple alternatives for actually implementing parametric types in a programming language. For example, in Rust impl indicates the language should compile a version of the generic type for each of the parametric types in use, i.e. code-duplication at compile-time. Rust’s dyn, on the other hand, tells the language to expect any satisfiable type, making it run-time determined. Java, in addition, knows the concept of type erasure, to support both parametric types and backwards-compatibility with the pre-generics era.

Nominal typing, as mentioned previously, determines types by checking whether a name “applies”, i.e. the named type is implemented. This means that specifying a parametric type does not yet indicate whether it means:

  1. exactly (only) that type, or
  2. that type or more special, or
  3. that type or more general.

(2) and (3) are possible with parametric polymorphism. Specifying these bounds is necessary for nominal typing, because the naming-mechanism points to exactly one type in the hierarchy. Notice how this is different from polymorphism where specifying a type will automatically allow anything more specialized?

Other typing mechanisms, such as structural typing, are more flexible in this regard. However, these have other costs. Regardless of the name, if the type’s structure matches, it is considered implemented. Go has a practice where you implement a method to “mark” the interface implemented. The method in itself is meaningless. However, everything implements the empty interface – given that there are no requirements – so this is an accepted workaround. Go’s typing mechanism has the benefit of not having to work with a type-hierarchy and bound-specifications for parametric types.

Gradual typing, type-hints (adaptability)

Dynamically typed programming languages have a disadvantage: as the code-base grows, it becomes more difficult to keep a complete overview of how each function is used, i.e. which types go in and which types are returned. Notice that in dynamically typed languages you can freely pass in multiple types, as long as – during their use – they all share the required attributes (fields, methods, properties, etc.). Therefore, it is possible to provide multiple different concrete types as argument for the same parameter. This is convenient but not strict.

Recent developments investigate the use of “type-hints”, which allow annotating any arbitrary function with some hints for the interpreter/compiler. These hints, if present, provide additional restrictions. Before, Python programs would generally have significantly more tests, which would in turn enforce the type expectations that are critical for correct functioning. With type-hints, you can leave the “type checking” to the hints and the compiler/interpreter, and have your unit tests focus on other characteristics, e.g. edge cases, of the values instead. One example of such type-hints is Python’s PEP 484, which allows gradually annotating parts of a Python program with type information.

Gradual typing takes an interesting approach: it assumes general-by-default, then incrementally adds type restrictions.
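A minimal sketch of gradual typing with Python's type-hints, using a hypothetical join_names function:

```python
def join_names(names: list[str], separator: str = ", ") -> str:
    # the hints restrict the general-by-default behavior; a static checker
    # (e.g. mypy) can verify them, while the interpreter still accepts anything
    return separator.join(names)

result = join_names(["Ada", "Alan"])

# A tuple would be flagged by a checker, yet still runs: hints are additional
# restrictions layered on top, not run-time enforcement.
still_runs = join_names(("x", "y"))
```

This captures the general-by-default starting point: without a checker, nothing changes; with one, each annotation incrementally tightens what is allowed.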

Cooperative vs pre-emptive multitasking (concurrency)

The notion of multitasking is used for having multiple tasks that run simultaneously. As with many things, there are multiple variants of multitasking. The multitasking discussed here operates in user-space, which has a significant benefit in that there is little overhead during execution.

Cooperative multitasking is multitasking where the programmer determines specific points suitable for (voluntarily) yielding control to, potentially, a waiting task. Co-routines are an example that applies cooperative multitasking. One co-routine, at some point, requires data from another co-routine. It then yields control to that co-routine, so it gets room to run. The second co-routine runs until a value is produced and then yields, such that the first co-routine is again free to pick up where it left off, now with the received value. This is an example of concurrency without parallel execution. Cooperative multitasking is an early model, but it also offers advantages due to explicit control.
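A minimal sketch of cooperative yielding, using Python generators as hypothetical producer/consumer co-routines:

```python
def producer():
    # yields control (and a value) at explicitly chosen points
    for value in [1, 2, 3]:
        yield value

def consumer(source):
    total = 0
    for value in source:  # each iteration resumes the producer until it yields again
        total += value
    return total

total = consumer(producer())
```

Control alternates between the two at the `yield` points only: concurrency by design, with no parallel execution involved.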

Pre-emptive multitasking, on the other hand, picks (seemingly) arbitrary interruption points to pre-empt the running task and switch to another task. With a scheduler present, the scheduler can determine when to interrupt the execution of a flow. When interruption is allowed is not completely random, as there are operations that should not be interrupted, however it is not controlled by the developer. Instead, a virtual machine and/or predetermined pre-emption points facilitate this action. Consequently, it is not predictable by the developer when exactly pre-emption happens.

Message-passing, channels (concurrency)

The primary concern of concurrent design is interaction. Without it you essentially have independent processes. Interaction combines small, independent processes into a single interdependent system.

In a previous section, we explained the ideas of CSP and the actor model. To make interaction between independent processes possible, there is the idea of message-passing. This allows completely independent processes to interact. Note that, due to their independence, there is non-determinism when considering the exact handling of interactions.

Distinctive properties:

  1. sender and receiver are independent processes, each with their own control flow;
  2. communication happens without taking locks or modifying each other’s memory;
  3. messages may be buffered, which gives rise to back-pressure;
  4. the exact ordering and handling of interactions is non-deterministic.

Further down, we will compare message-passing to method-calling, in order to resolve some long-standing misconceptions that claim message-passing would be semantically equal to method-/function-calling.

As mentioned, the idea of concurrency is that, by design, there are multiple processes functioning independently. By the nature of message-passing, the receiver must process messages sufficiently fast. If the receiver is faster, it will starve of work and wait. If the senders are faster, the buffer – through whatever mechanism it is provided – will fill up and provide back-pressure to the sender. Back-pressure is a positive phenomenon that allows you to act upon reaching capacity, i.e. scale up when reaching a limit. (This is an alternative to load-balancing.)

CSP vs Actor model (concurrency)

In describing the idea of concurrent designs, it is clear that we need something to realize this design model. Multiple models have been proposed over the years. Two well-known models are Communicating Sequential Processes (CSP) and the Actor model. These models are quite similar.

The original idea of CSP envisioned a pipeline of processes, one passing messages (data) to the next; the message data would be string-based. Later versions relax the idea of the pipeline and define the notion of channels used for communication. In the Actor model, independent actors interact by sending messages directly to one another. As is the core idea of concurrency, each ‘process’ or ‘actor’ is its own independent, self-sufficient unit with its own control flow.

Misunderstandings and common mistakes

Now, in light of this new information, let’s discuss a few common mistakes and/or misunderstandings: ideas that have persisted for a while, of which – I am sure – many people have realized that not all pieces fall into place.

Methods are functions

Methods are functions. Calling a method is calling a function with the instance provided as an input parameter. Programming languages make things easier by providing support as part of the syntax: simplifying the call itself, and providing related benefits. There are programming languages that expose some of these internals.

Go allows calling methods in two ways. See the following example and observe the function calls in main(). Both calls achieve the same result.

package main

import "os"

func main() {
    var d = Demo{}
    d.hello()      // conventional method-call syntax
    Demo.hello(d)  // method expression: the same function, receiver passed explicitly
}

type Demo struct{}

func (d Demo) hello() {
    os.Stdout.WriteString("hello\n")
}


Python similarly exposes its internals, giving you the freedom to explore these kinds of details.
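For illustration, a minimal sketch in Python mirroring the Go example, with a hypothetical Demo class:

```python
class Demo:
    def hello(self):
        return "hello"

d = Demo()

a = d.hello()      # conventional method call
b = Demo.hello(d)  # the same function, with the instance passed explicitly

# the bound method wraps the very same function object
same_function = Demo.hello is d.hello.__func__
```

Both calls reach the same function object; the method-call syntax merely binds the instance for you.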

The method exists because an object-aware syntax can more precisely guard access to the data (encapsulation), and dynamic dispatch can handle methods being overridden or determine the right method given generalized access through an interface (adaptability). Even though there is little difference in terms of calling the executable unit, there are other concerns that are validated. However, consider that all of these features are features only if you need encapsulation, i.e. the abstraction it offers. If you do not, it only provides overhead and complication. That is why functions are a more basic feature that applies to other circumstances.

“Functions are not OOP???”

This is a common statement referring to the fact that in OOP everything should be an object. So the use of functions would, supposedly, defeat the whole point of having OOP.

Earlier, we discussed the various notions of OOP that are known. The encapsulation-variant accomplishes abstracting away a concept’s details through hiding state with dedicated logic in separate structure, i.e. a class. Methods are a mechanism with additional privileges, so if the point is to encapsulate a concept, then using normal functions defeats the point of the OOP mechanisms, and functions offer less of the isolation/protection capabilities.

OOP is all about managing complexity in design by adding structure. There are still circumstances where it is just about (re)using some code. It is perfectly fine to capture that as a plain function, or its closest representation in some programming languages: a static method (without any state).

Method-calling is not message-passing

It is sometimes said that method-calling (in popular OOP programming languages) is message-passing. This is not correct. They are closer to being semantic opposites than equals.

Message-passing is a mechanism introduced to solve the problem of communication between communicating processes, as discussed earlier in the OOP notion based on “living cells”/communicating processes. The critical property of message-passing is that it has to deal with concurrent processes: depending on run-time circumstances, these may be executing simultaneously. Message-passing enables communication without the need to take locks and/or modify each other’s memory. Depending on semantics, message-passing may or may not play a role in synchronization, such as when two processes are expected to check the communication channel at the same time for a direct hand-over of the message.

“A method call is also known as message passing. It is conceptualized as a message (the name of the method and its input parameters) being passed to the object for dispatch.”

Wikipedia: OOP - Dynamic dispatch/message-passing

“Message passing is ubiquitous in modern computer software.[citation needed] It is used as a way for the objects that make up a program to work with each other and as a means for objects and systems running on different computers (e.g., the Internet) to interact. Message passing may be implemented by various mechanisms, including channels.”

Wikipedia: Message passing

Even the two quotes above describe message-passing using a different context and even a different scale.

The notion of dynamic dispatch was discussed earlier. That section also explains why they decided to call it a “message”. However, this form of messages solves no real problem other than resolving a level of indirection that is introduced for generalization. It is a bit of handy logic automatically introduced by the language to hide some of its abstraction.

I also considered – purely speculating – whether the message-passing mechanism might have been optimized away in early days due to circumstances: cooperative scheduling and co-routines in a single-process configuration that would swap one coroutine for another and then back. The idea being that the compiler would recognize that a large part of the process would be reusable, even if a co-routine requires more set-up than merely calling a subroutine. However, this seems less plausible than the fancy naming of “advanced” dynamic dispatching.

So, let us have a look at the characteristics of method-calling and message-passing.

Method-calling

Given threads T_a and T_b, and an object o with a method process().

T_a: o.process() means thread T_a will look up the method for o.process(), execute its body and then resume normal operations with its result.
T_b: o.process() means thread T_b will look up the method for o.process(), execute its body and then resume normal operations with its result.

The caller thread immediately performs the method look-ups necessary to traverse the indirections and finds the active, relevant method body. It then executes that logic as part of its own control flow, and subsequently resumes execution with the results.

Message-passing

Given threads T_a and T_b, and a thread T_o and object o which is known to accept messages regarding process-something.

T_a: sends a message to o with process-something.
T_b: sends a message to o with process-something.
T_o: processes one message, then processes the following message.

Which message arrives first may vary depending on the circumstances of execution, as there are three independently functioning threads.

The sender thread sends a message, then either waits for a result to arrive or continues processing other logic. The receiver thread (waking up if it was waiting) takes the next message. It executes whichever logic is needed to satisfy the request in the message, then replies with the result, or a failure. In the meantime, other messages may arrive but have to wait their turn: messages are taken from the queue (in order). Meanwhile, the sender thread may be doing something else or waiting for the result; this is of no concern to the receiver.
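A minimal sketch of this interaction, using Python threads and a queue as the (assumed) buffering mechanism:

```python
import queue
import threading

inbox = queue.Queue()
results = queue.Queue()

def receiver():
    # T_o: takes messages from the queue one at a time, in arrival order
    for _ in range(2):
        sender, payload = inbox.get()
        results.put((sender, payload.upper()))  # reply with a result

worker = threading.Thread(target=receiver)
worker.start()

# T_a and T_b: send a message, then continue independently
inbox.put(("T_a", "process-something"))
inbox.put(("T_b", "process-something"))

worker.join()
replies = [results.get(), results.get()]
```

The senders never touch the receiver's memory; the queue decouples their control flows, and no ordering is assumed beyond the queue's own.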

Please note that it is certainly possible to know the ordering and which method executes, in the case of method-calling. There have been rumors that this is unknowable for, e.g., a Java program. This is false. The inherent non-determinism referred to is only present when concurrency is involved: concurrent threads each operate independently, so you can never know exactly how their execution will interleave, even if only due to outside interference by the operating system’s scheduler.

Java, C, C++, C#, etc.

These languages do not do message-passing. The method call looks like a method call and is semantically a method call.


Go

Go supports concurrency through its goroutines. It has a notion of communication channels, simply called channels, which provide exactly what message-passing describes. You get to choose: make a plain method call, or set up (typically multiple) goroutines and let them communicate through channels. Go offers this support as part of the language’s syntax, so it is part of the language itself, rather than an external library or framework.
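As a minimal sketch, the request/reply style of message-passing described earlier maps directly onto goroutines and channels; the `request` type and the squaring logic are purely illustrative.

```go
package main

import "fmt"

// request carries the work item plus a channel on which the
// receiver sends its reply; the names are illustrative.
type request struct {
	n     int
	reply chan int
}

func main() {
	inbox := make(chan request) // o's message queue

	// The receiver goroutine (T_o) takes one message at a time,
	// in arrival order, and replies with the result.
	go func() {
		for req := range inbox {
			req.reply <- req.n * req.n
		}
	}()

	// Two independent senders (T_a, T_b); each sends a message,
	// then waits for its own reply.
	results := make(chan int, 2)
	for _, n := range []int{3, 4} {
		n := n
		go func() {
			reply := make(chan int)
			inbox <- request{n: n, reply: reply}
			results <- <-reply
		}()
	}

	// Which sender's message arrives first is non-deterministic,
	// but each sender is only concerned with its own reply.
	sum := <-results + <-results
	fmt.Println(sum) // 25, regardless of arrival order
	close(inbox)
}
```

The receiver processes requests strictly in arrival order, while the senders remain independent, which matches the message-passing semantics described above.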

Objective-C and Smalltalk

“Messages” are compiled into method calls that resolve the indirection. What Objective-C does differently is that this internal translation builds in some ability to customize the method lookup, proxy the method call to another instance, or decide not to handle the call at all. This is effectively similar to the capabilities of Python, which exposes comparable hooks for attribute lookup and operator overloading. It is not message-passing as described here.

Smalltalk does not do message-passing as far as concurrency is concerned. It does support dynamic dispatching and, similar to Objective-C, offers additional flexibility in its method-calling semantics.

Erlang and Newsqueak

Erlang does indeed offer true message-passing. The blog post explains certain key points of message-passing, such as ordering, signals of various formats (among which are messages), the interplay of various independent processes, how some guarantees are lost when multiple different senders are involved, and the way sending and receiving work. It even mentions some guarantees lost when networking is involved, due to the chance of losing connectivity. It makes sure to point out how sending and receiving are independent operations and that processes are not instantly aware of each other’s actions.

Newsqueak similarly offers true message-passing and independent processes, as documented in the presentation “Advanced Topics in Programming Languages: Concurrency/message passing Newsqueak”. Newsqueak was likely also a source of inspiration for Go.

Concurrency vs parallelism

This is primarily about how you express the solution during the design-stage vs how it is implemented during the implementation-stage. Concurrency, i.e. concurrent designs, offer a way to express a system of independent units exactly as such. When executed, concurrent units run – either simultaneously or in sequence – whichever is suitable given circumstances and platform.

Concurrent units are either pre-arranged at compile-time, because dependencies (interoperating units, or dependence on other hardware or systems) inform the compiler that there is already a sensible way to arrange this, or it is determined in the moment at run-time. Parallelism, on the other hand, is about having a linear procedural body of code in which, at some point, you realize you have to repeat some logic many times; if you can run, for example, eight iterations of this logic simultaneously, you ideally increase throughput and reduce your execution time eight-fold.

Concurrency is about designing your solution differently, while parallelism is about executing simultaneously, i.e. in parallel. These are at different stages and are about different concerns, giving different benefits.
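To illustrate the parallelism side, here is a minimal Go sketch of a linear loop whose iterations are fanned out to run simultaneously; the function name and the data are illustrative.

```go
package main

import (
	"fmt"
	"sync"
)

// squareAll fans the iterations of a plain loop out over
// goroutines. Each goroutine writes only to its own index,
// so the slice itself needs no locking.
func squareAll(data []int) []int {
	squares := make([]int, len(data))
	var wg sync.WaitGroup
	for i, v := range data {
		wg.Add(1)
		go func(i, v int) {
			defer wg.Done()
			squares[i] = v * v
		}(i, v)
	}
	wg.Wait() // resume the linear flow once all iterations finish
	return squares
}

func main() {
	fmt.Println(squareAll([]int{1, 2, 3, 4})) // [1 4 9 16]
}
```

Note that nothing about the design changed: the logic is still a linear loop, and whether the iterations actually execute on multiple processors is left to the runtime.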

Is OOP a mistake?

In the beginning we mentioned that there are many conflicting views and valuations of the OOP paradigm. Many differences are based on language-specific aspects or alternative explanations of leading ideas. Unfortunately, years of discussing, documenting and arguing have not resolved these issues. It is clear that there are real concerns.

(subjective) OOP is not a mistake. Nor are the blog posts that argue for it. Nor are the blog posts that argue against it. Basically, the amount of confusing information and perpetuated myths is significant. Many talk about different things all the way through. Many concerns raised, as well as some perceived benefits, were due to differences in interpretation; the criticisms are valid, but only given certain assumptions. I intentionally avoid calling it “misinterpretation”, because that would be wrong. How can you “misinterpret” if there are (seemingly) no consistent explanations at all?

OOP needs less mysticism, as happens when multiple notions of OOP get mixed up, and more practical transparency, such that we don’t fall for the misinformation and confusion. The paradigm is applied worldwide; it is time to refine our understanding.

The OOP paradigm space

In a way, we have described the space defined by the OOP paradigm: the “hole” that many programming languages are already attempting to fill. Earlier, I intuited that within one dimension ideas would need to compete. We see a similar pattern with programming languages. Programming languages are extending into areas where they are still weak or underrepresented. They tend to have an increasingly overlapping feature set. The real choices are in the right flavor of features and the right level of control for the intended purpose.

Java is developing some new capabilities that perfectly fit the pattern: Project Valhalla brings support for value types, as Java was originally designed to work with composite objects through pointers and heap memory. Project Loom, more or less, introduces concurrency. These projects significantly change the implementation of the language.

Python introduces gradual typing, in a way spreading further in adaptability. Python has had an (early) presence in concurrency, with its generators and cooperative multitasking. Parallel execution (but not concurrent design) was long held back by an early design choice, the global interpreter lock, and has since seen improvements. However, it shows how concurrency is more than parallel execution.

Go, in the way I remember it, has had a balanced introduction: all three principles supported to some extent from the start, improving across the board. Along the way, it has added support for generics, improved support for concurrency, and many other details not directly influential to the design itself. Go – or rather its community – has been inventing conventions for those parts of the language where it does not provide syntax. Similarly, at its introduction there was a heavy push towards the use of channels and goroutines. This proved a double-edged sword: it helped spread awareness about concurrency, but at the cost of excessively framing challenges/problems as solvable with concurrency. This was unintentional.

Rust, during its pre-stable (0.x) times, dropped support for concurrency to focus on other aspects; introducing concurrency too early proved difficult. Rust’s strength lies in its ownership and borrowing semantics, which take concerns of encapsulation to a new level of detail. Concurrency was reintroduced with version 1.39.0 (async/await). Note that the ownership semantics are more than just a design aspect; an important part of them is making things work given the design.

The OOP paradigm as described, must surely have lived intuitively in the minds of many people. To a larger extent in programming language designers, who have thought “we can improve our language in the area of …”. To a lesser extent in developers, who would see a feature of some other language, knowing it is useful but often not critical; merely something that “would have been nice” but can be worked around. In a way I wrote this post to highlight the principles: they each bring their own unique space of ideas, each requiring its own mindset, that languages may or may not be able to offer yet.

Simple is not easy

Simple is not the same as easy. Regardless of whether you follow the definition from my earlier post or you go with your intuitive notion, there is a common understanding that something “simple” does not mean that there is no depth to understanding it or that you can achieve the end result within the blink of an eye.

The same holds for the OOP paradigm, and people have found this out time and time again. Whether it is choosing the correct abstractions (encapsulation) for your types, and how crippling it can be if you do not, or the trickiness and benefits of designing systems concurrently: none of these things are plain easy. They require insight, trial and error, and knowledge of the intricacies involved. You build up intuition through experience: by trying things and discovering the characteristics of certain solutions, either through failure or (partial) success. That, in turn, makes it easier to predict results and anticipate circumstances. Interestingly, each of the principles of the OOP paradigm brings its own unique class of challenges, each requiring a unique mindset.

With the “simple solution” you gain benefits such as automatic optimizations (compiler/JIT), transparent execution of concurrent systems on however many processors, and more. There are many benefits.

Simplicity forced or biased?

In the posts that explore the definition of simplicity, I mention that there is possible bias in how the definition came to be. The definition has some roots in my exploration of OOP, specifically in the construction of classes. In the posts on the definition, I have explored a few different cases of applicability, even outside the field of engineering.

I cannot dismiss the possibility that there is bias; however, there is the interesting matter of applying it now for the second time in software engineering, this time at a different level of abstraction: design.

First it was applied to the small-scoped “implementation-stage” concerns of classes, and now to larger-scoped design concerns. Several observations contribute to its validity: the ease with which the dimensions of complexity map onto the variations of OOP; the fact that exactly the disputed part (type-hierarchy vs. structural typing to determine the type match for generalization) is not a principal component but rather a supporting mechanism used in realizing the principal components of reduction/expansion and specialization/generalization; and the fact that the stringent advice to prefer composition over inheritance matches this exactly.

Another matter that confirms this representation of the OOP paradigm is the realization that concurrent design/concurrency is the simpler solution. This aligns with experiences with Petri nets and state-machines, recent movements towards concurrent designs, the idea of co-routines, and solutions such as Python’s generators. It is the ability to design with many control flows while leaving it to the execution environment to figure out how to make everything run concurrently, regardless of whether it is actually executed on multiple processors/threads.

Furthermore, the unified OOP paradigm enables us to categorize and evaluate many languages, even ones considered rather more exotic, that all claim to be OOP-based programming languages.

Finally, it is important to realize that the definition of simplicity is identified as an emergent pattern. The dimensions allowed us to look for specific characteristics among all definitions of OOP. The definition of simplicity helped us identify already existing characteristics as important, rather than inventing new characteristics or picking at random.


The unified OOP paradigm is an attempt to establish a (shared) foundation to which all OOP-like programming languages can relate. The foundation creates a common ground for many programming languages and various notions of OOP: to understand, to build upon, and to share knowledge – even among languages.


This post is a way of exploring the OOP paradigm(s) through the applicability of the definition of simplicity, as stated at the beginning. However, even if you do not accept this definition at all, the unified OOP paradigm could still hold. The definition was used as a guide: to find ideas that share the same characteristics, potentially principles.

The “true value” of the unified OOP paradigm is in the formulation of three identifying principles that are based in theory, and the ability to explain and relate many (often incomplete) programming language features. I have tried to demonstrate my claim that OOP programming languages of many different flavors all converge to these three principles.

This post is a convergence of multiple OOP paradigms into the one paradigm it always claimed to be. It is not meant to argue for or against OOP, as explained in an earlier section. Instead, it takes a step back and (tries to) resolve multiple decades of discussion based on unclear definitions of OOP. The unified OOP paradigm is named such to emphasize the convergence of multiple OOP notions and many programming languages with deviating characteristics. It captures the strengths of three different notions of OOP as its principles. Each principle is sufficiently different that it requires its own mindset; this is not surprising, as each tackles a distinct problem. The paradigm explains the principles in more detail, and how the many mechanics of OOP programming languages play supporting roles for one of the three principles.

For a more complete picture, it helps to read The “minimal-objects” approach to OOP, which is written with roughly the same ideas in mind: an attempt to separate out the implementation-level details that are prevalent in many guidelines and best practices of encapsulation-based OOP programming languages, and to clear up some of the prevailing OOP myths.

Open questions


These references are also present in-place throughout the post. The final post in the series will include many more references that were used over the past years; those were more influential for the overall idea. Wikipedia articles are used as a quick reference for confirmation, rather than as an authoritative (single) source.

Following are references used in this article. There are also shared references.


This article will receive updates, if necessary.

This post is part of the Simplicity in engineering series.
Other posts in this series: