The Layered Architecture (3-tier, n-tier or multitier architecture) is one of the best-known and most widely used concepts in Enterprise Development. It is the de-facto standard for building applications, so much so that it would be hard to find a single application in the enterprise software realm that does not conform to it.
Much has changed, however, since the inception of this architecture pattern: there are new ways to organize code, new ways to organize teams, and new ways to operate software.
In light of these changes it is time to re-evaluate the Layered Architecture.
Layered Architecture

The basic idea of the Layered Architecture is to split the application into at least three distinct areas of code: Presentation, Business Logic and Persistence, where Presentation can access Business Logic and Business Logic code can access Persistence, but not the other way around. Conceptually it looks like this:

Because of how this is organized, we call the individual parts “layers”.
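To make the idea concrete, here is a minimal sketch of what such a layering typically looks like in code. All class and method names are made up for illustration; real projects use frameworks for the wiring, but the direction of the calls is the point.

```java
// Presentation layer: may call down into the business layer.
class AccountController {
    private final AccountService accountService;

    AccountController(AccountService accountService) {
        this.accountService = accountService;
    }

    String renderBalancePage(long accountId) {
        return "Balance: " + accountService.currentBalance(accountId);
    }
}

// Business Logic layer: may call down into persistence,
// but knows nothing about the presentation above it.
class AccountService {
    private final AccountRepository repository;

    AccountService(AccountRepository repository) {
        this.repository = repository;
    }

    long currentBalance(long accountId) {
        return repository.loadBalance(accountId);
    }
}

// Persistence layer: knows nothing about the layers above it.
class AccountRepository {
    long loadBalance(long accountId) {
        // would execute a query here; hard-coded for the sketch
        return 0L;
    }
}
```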
Why is it so popular?
Mindset

It is easy for us developers to think of an application in terms of different technologies and to distinguish between functions based on what technology they involve. Although we “know” that the business logic is important, we tend to focus on the technology: architectures, paradigms, patterns, and so on. We don’t really want to understand how the business works, because it’s (let’s be honest) less fun. You have to talk to people, understand and consolidate different viewpoints, and come up with a model that everybody understands. It’s messy. So, technology tends to dominate software development.
The Layered Architecture reflects this mindset perfectly. It decomposes the application based on technical details like presentation and persistence. It creates a separate and well compartmentalized place for “business logic”, the spooky thing we don’t really like to think about. We sometimes even invent specific rules for ourselves like presentation-, or persistence-agnosticism, so the business logic can’t “ruin” our otherwise perfect design.
This is in stark contrast to the insights of the early software development movement that software developers need to be domain experts and actually understand what they build and why. It is in contrast to today’s trends like DevOps too, in which the team’s responsibility isn’t just technical; it is also to actually operate and support a business function in its entirety.
Distributed Applications
When modern enterprise development took off in the 90s, it was quite normal to develop “everything” the enterprise needed in one or at most a few monolithic applications. The Waterfall Model was the de-facto process for software development.
Of course, the development teams necessary for such projects eventually became too big to work properly, so people thought about how to scale organizationally. Having the mindset described above, they came up with the idea of splitting up the application based on technology. There was a “frontend” team, a “backend” team, maybe even a “middleware” or “database” team.
In this setting the Layered Architecture fits very well, because the interfaces between the layers could more or less be turned into remote calls, so the different “horizontal slices” of the application could be developed “independently”.
This thinking was reflected in the early Java Enterprise versions too, where they expected the Business Logic (the EJBs) to be accessed by remote clients, like remote web interfaces or standalone applications.
This didn’t work. For one, the Web turned out to be much more than just one of the remote clients. The other problem was that nobody really distributed their applications: for scaling it was less than optimal, and it was a big hassle and a performance hit. Today, even Enterprise Java has forgotten about remote calls; plain local object injection (CDI) has replaced the heavyweight remote-capable EJBs. New trends like Domain-Driven Design and Microservices advocate splitting applications vertically instead of horizontally. And new types of development processes and organization, like cross-functional and DevOps teams, support this vertical slicing and scaling much more efficiently.
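As a rough illustration of that shift, the “plain local object injection” mentioned above looks something like the sketch below. The bean and method names are hypothetical; only the standard CDI annotations are real.

```java
import javax.enterprise.context.ApplicationScoped;
import javax.inject.Inject;

// A hypothetical collaborator, itself a local CDI bean.
interface AccountRepository {
    void debit(String iban, long amountInCents);
    void credit(String iban, long amountInCents);
}

// A plain local bean: no remote interface, no stub, no deployment descriptor.
@ApplicationScoped
public class TransferService {

    @Inject
    AccountRepository accounts; // injected by the container as a plain local object

    public void transfer(String fromIban, String toIban, long amountInCents) {
        accounts.debit(fromIban, amountInCents);
        accounts.credit(toIban, amountInCents);
    }
}
```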
Components and Reusability
When Java Enterprise first came out, the basic idea was that it would create a platform of sorts, where different vendors would create security-, persistence- and presentation-agnostic components which could be dropped into any application implemented on the platform.
One example was the “ShoppingCart” bean. If you wanted to implement a Web-Shop, you would just download an already existing “ShoppingCart” bean from some provider, configure it, and it would be ready to be used in your application.

Needless to say, this didn’t happen. The promise of reusable components, just like the idea of reusing business logic across applications, turned out not to be practical. Modern trends reflect this insight well. The microservices approach suggests that instead of reusing code, we should separate things and make them easily replaceable. Domain-Driven Design’s Bounded Context concept says the same: there should be clearly separated contexts which create semantic boundaries, which in turn make sharing “business code” among contexts unnecessary and unwelcome by definition.
Separation of Concerns
One of the more practical arguments for a layered design is that it creates a Separation of Concerns. It means that the business logic for transferring money from one account to another should not concern itself with what color the transfer will have on the Web GUI.
This argument sounds reasonable on the surface; however, it implies more than what it says. It is almost always interpreted to mean that all presentation-related logic should be separated out of the “business logic”. Not just colors and font sizes, but everything. The “business logic” should be “pure”, and should not know anything about presentation.
This obviously does not reflect reality, as business-related things do tend to have a UI and do tend to have persistence. An Account, an Amount, a Transfer, etc. do need to be presented in addition to fulfilling other functions. How does the Layered Architecture address this apparent conflict? It doesn’t really. It usually uses anemic objects to push the data of an Account, an Amount, etc. to other layers, so it can be presented or used. It smears business-related knowledge all over the application, because every layer has to understand the data for itself. The Presentation needs to understand what an account is, how to ask the user for it, how to create the object and with what parameters. Repeat for all other layers.
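A small, made-up sketch of that smearing: the same account knowledge reappears in every layer, each time as “just data” that the layer has to interpret for itself.

```java
// The "business object": pure data, no behavior (anemic).
class AccountDto {
    String number;
    long balanceInCents;
    boolean known;
}

// Presentation has to understand the account's fields to render and collect them.
class AccountForm {
    AccountDto toDto(String numberField, String balanceField, boolean knownCheckbox) {
        AccountDto dto = new AccountDto();
        dto.number = numberField;
        dto.balanceInCents = Long.parseLong(balanceField);
        dto.known = knownCheckbox;
        return dto;
    }
}

// Persistence has to understand the same fields again to map them to columns.
class AccountRowMapper {
    Object[] toRow(AccountDto dto) {
        return new Object[] { dto.number, dto.balanceInCents, dto.known };
    }
}
```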
Practical considerations
Architecture and Design
The architecture of any software should be directly driven by the requirements, and the resulting design should reflect the business domain and structure thereof.
This point is so important it bears repeating. The architecture of software should mimic the natural structure of the business requirements. These requirements may of course include some technical ones as well, like integration to other systems, performance requirements or non-functional requirements in general.
A Layered Architecture does not reflect the requirements however. It is a purely technical structure, which breaks up cohesive functional units into at least 3 distinct pieces for the corresponding layers. This is a great cost and it only makes sense to pay it if there is a very big gain to offset it.
Reverse Semantic Dependencies
The Layered Architecture demands that dependencies run only one way. The Business Logic knows nothing about Presentation, the Persistence knows absolutely nothing about the Business Logic.
Upon closer inspection this statement cannot be true. The Business Logic defines the data the Presentation receives, usually in the form of Data Transfer Objects, which are pure data structures. Any modification on the Presentation side other than trivial color changes will need additional (or less) data, or data that is structured differently, paged differently, etc. In other words, the Business Logic will respond to Presentation changes. Although there will be no physical dependency visible in code, there will be an invisible semantic dependency that runs from Presentation to Business Logic.
The same goes for Persistence. The Business Logic can only use things which the Persistence Layer offers. So if a new query or a new update statement is needed for a new use-case, the usual approach is to just implement it in the Persistence Layer and use it in the Business Logic Layer. Again, no physical dependencies are there, but changes keep happening in Persistence because of Business Logic changes or needs.
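Here is a hypothetical example of such an invisible dependency: a purely presentational decision (“show the accounts page by page”) still forces the Business Logic’s interface to change, even though no code dependency points that way.

```java
// Before: the Business Logic exposes everything at once.
interface AccountQueryService {
    java.util.List<AccountSummary> allAccounts(String customerId);
}

// After the UI decides to show accounts page by page, the "independent"
// Business Logic has to grow paging parameters, purely for presentation's sake.
interface PagedAccountQueryService {
    java.util.List<AccountSummary> accounts(String customerId, int page, int pageSize);
}

// Plain data holder used by both variants.
class AccountSummary {
    String number;
    long balanceInCents;
}
```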

This is why most Enterprise architects and developers don’t feel comfortable having “logic” in the database or exploiting all the features the database could provide. It amplifies the dissonance in the software’s design, the inherent conflict between the architecture and the actual functionality it is supposed to support. This separation of technologies simply doesn’t allow the database to be smart, because all the “smartness” needs to be in the Business Logic.
It makes the software unmaintainable

The Layered Architecture, by localizing technology aspects, must almost by definition spread out the business aspects. This is great if you have more technology-related change requests and fewer business-related ones. For most enterprise projects out there, however, it’s clearly the other way around.
If you want to change an Account, for example to introduce a “known/unknown” flag, to change the account number to an IBAN, or even to support a different Account type, you’ll have to hunt down and change each and every piece of code that receives any data associated with the Account. You’ll have to change the “UI” of the Account for sure, the Business Logic that handles it, and of course the Persistence that stores it. You will very likely have changes in all layers. This amount of work is surprisingly large for a change that only seems to involve a single business concept.
Everyone who has ever had the unfortunate task to “simply add a new field to this page” in such an architecture knows how difficult that can be.
It breaks Object-Orientation
This point is somewhat redundant and maybe theoretical, but is worth mentioning. The Layered Architecture breaks almost all rules and idioms of Object-Orientation. Here are just a few:
- Encapsulation: Encapsulation does not survive crossing layers, because the interfaces between layers are defined in terms of data.
- Abstraction: There is very little to no abstraction, because every layer has to understand all concepts nearly equally.
- Cohesion and Coupling: Cohesive parts of the same “thing” are broken up because of the potentially differing technologies involved. So it makes the code less cohesive and more coupled.
- Law of Demeter: Access to data, using DTOs for example, almost always leads directly to LoD violations.
- Tell, don’t Ask: Objects don’t get told what to do in the Layered Architecture; they are asked for data, and then things happen with that data somewhere else, out of the control of the object producing or holding it (see the sketch below).
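For contrast, a minimal sketch (with made-up names) of the same decision written in the “ask” style that typically appears across layers, versus the “tell” style:

```java
// "Ask" style, typical across layers: data is pulled out of the object
// and decisions about it are made somewhere else.
class AskStyle {
    static long fee(AccountData account) {
        if (account.getBalanceInCents() < 0) {   // caller inspects the data...
            return 500;                          // ...and decides on its behalf
        }
        return 0;
    }
}

class AccountData {
    private long balanceInCents;
    long getBalanceInCents() { return balanceInCents; }
}

// "Tell" style: the object is told what is needed and keeps its data to itself.
class Account {
    private long balanceInCents;

    long overdraftFee() {
        return balanceInCents < 0 ? 500 : 0;
    }
}
```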
Summary
We have all, at some point or another, worked on or even designed software with the Layered Architecture. Things change, however, and it is time to realize that the Layered Architecture is perhaps not as applicable as previously thought, and that it perhaps no longer deserves the status of de-facto standard for Enterprise software designs.
What are the alternatives then? Although there are many alternative patterns, there should be no grand design to rule them all. Each design should reflect the requirements of the particular software and should be built following the concepts and terminology of the business.
This is a very interesting subject. Too bad there is very little written about it (or I just don’t know where to look).
It would be great to read an article (or see an example) about how to properly write an enterprise application without this artificial layering, where data doesn’t need to be remapped from one layer to another and where implementing a single change would require code modifications only in one place. I get the point that the design would differ in different domains and for different business requirements, but perhaps it would be possible to take some example of a particular domain / business requirements, and then to write about how it should be implemented, including presentation and persistence.
Thanks for your comment, and great idea about a demo of some sort.
As you said, it would be difficult to pick a good example because the solution depends on the actual business case. I was thinking maybe the Spring Petclinic, or the JEE PetStore if it’s still around somewhere. Even then, people might argue that those are just technology showcases and not actual recommendations on how to build an application in real projects.
Also, the test of a good architecture is how maintainable it is while and after it is built. So the test would be to implement some “real” changes. Having “real” changes is difficult if the project is not actually real.
Do you have some ideas in this area perhaps?
Yes, choosing a good example is probably the trickiest part. The best I can think of at the moment is the Clean Code Case Study project you reviewed some time ago. It’s small and yet large enough to be used as a “reference architecture” project. Although a real project with live pull requests would probably be better. People don’t usually put real enterprise applications into open source though.
Hi, great post! Doesn’t it make testing harder? I’d love to see and explore some simple project implemented following this idea and having persistence and presentation parts. Is there anything on github you can recommend?
You are right, it would probably help to actually show an alternative design. Unfortunately, I have not yet come across a suitable public project on github or elsewhere.
Again, the point for me would be to have actual change-requests in addition to having an as-is state of a project, to be able to compare how the two architectures hold up to changes, because that is where the differences show.
I’m open to suggestions, if you have any.
Thanks for another great post.
This resonates with my experience as well.
The first system I worked on had layers separated into Maven modules (model, business-logic, webapp). It worked well at first – the structure was simple and easy to explain. The app was successful and as the codebase grew, it became obvious that this approach did not scale.
Back then I read about the package-by-feature (instead of package-by-layer) approach [1]; it made sense, and so we started moving in that direction. First, we’d simply move all classes related to a given feature into its own package, without changing them. This opened the door for further improvements and simplifications (like leveraging package-private visibility, removing some unnecessary mapping between layers, etc.), but I no longer recall which of those materialized while I was with the company.
One thing was clear – dropping packaging by layer undoubtedly improved the system.
[1]: “Package by feature, not layer” (http://www.javapractices.com/topic/TopicAction.do?Id=205)
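Roughly, the difference looks like this (the package and class names below are just made-up examples):

```
// Package by layer: every feature is spread across the technical packages.
com.example.app.web.OwnerController
com.example.app.web.VisitController
com.example.app.service.OwnerService
com.example.app.service.VisitService
com.example.app.persistence.OwnerRepository
com.example.app.persistence.VisitRepository

// Package by feature: each feature lives in one package, and everything that
// isn't the entry point can be made package-private.
com.example.app.owner.OwnerController
com.example.app.owner.OwnerService        (package-private)
com.example.app.owner.OwnerRepository     (package-private)
com.example.app.visit.VisitController
com.example.app.visit.VisitService        (package-private)
com.example.app.visit.VisitRepository     (package-private)
```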
First of all, thanks for your article. You have many good thoughts. However, I have to add my comments. I have observed that every couple of years someone will come up and question the multi-tier architecture, specifically attacking it on the grounds that it is incompatible with the properties of OO design such as encapsulation, inheritance, and polymorphism. Of course that is not true, as I will explain right below.
If you follow principles such as Separation of Concerns, you naturally end up with a Service Oriented architecture that uses anemic domain models. Because of this I will also claim that such an architecture is optimal, as it is an outcome of evolution. Such an architecture can and most certainly will use all of the properties of OO, however not all three of them at any single time.
Service components will be composed of separate interface and implementation components. That means they employ virtual methods and the polymorphism property of OO. They will also use encapsulation implicitly, since the interface will typically not provide access to the internal structures of the implementation component. Sometimes services will also use inheritance for sharing common code, though that is more commonly accomplished through composition of separate services.
Data components, on the other hand, will mostly use encapsulation and inheritance and more seldom polymorphism. They will typically use the language’s encapsulation features to allow controlled access to their data members through getters and setters, and inherit from some base data structure. Virtual methods and polymorphism do not have much use in these objects, other than a few core methods such as toString which deal with pure data. And that is the key thing to understand about this class of components: their purpose is to be as pure as possible, because that is the only way they can be reused across as many semantic contexts as possible, exactly as you described in your article.
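(To make that concrete, a rough sketch of the split I am describing, with made-up names: a service component as interface plus implementation, and data components as pure structures.)

```java
// Service component: separate interface and implementation (polymorphism, encapsulation).
interface PaymentService {
    Receipt pay(PaymentRequest request);
}

class DefaultPaymentService implements PaymentService {
    @Override
    public Receipt pay(PaymentRequest request) {
        // internal details stay hidden behind the interface
        return new Receipt(request.getAmountInCents());
    }
}

// Data components: "pure" structures with getters/setters, meant to cross boundaries.
class PaymentRequest {
    private long amountInCents;
    public long getAmountInCents() { return amountInCents; }
    public void setAmountInCents(long amountInCents) { this.amountInCents = amountInCents; }
}

class Receipt {
    private final long amountInCents;
    public Receipt(long amountInCents) { this.amountInCents = amountInCents; }
    public long getAmountInCents() { return amountInCents; }
}
```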
These two classes, service and data components, correspond to the interface and type sections of a WSDL. These two types of components are separate and distinct for two reasons. Architecturally, that is the best approach to reuse. Technically, they belong to different scopes. I could elaborate on that, but I won’t do so here.
So there you have it. Multi-tier SOA architecture is compatible with OO, and it does use all of its properties. It does so, however, wherever and whenever it is needed. Being OO, or anything-oriented for that matter, does not mean you have to blindly use all concepts at all times.
Thank you for your detailed rebuttal. I wish I’d read your comment here instead of over at DZone 🙂
I won’t repeat what I’ve written there, but I would like to know whether you have any arguments about the practicality of the layered architecture. Do you see the practical problems I describe? Do you concede that when changing business logic, you often find that what you’re doing concerns multiple layers, i.e. that things that change together aren’t actually together?
Hi again. I also replied to you over at DZone. I have shared your considerations at times throughout the years. I have tried to work with design approaches such as DDD, and I have come to the conclusion that, after all is said and done, your architecture will end up as a variation of a service-oriented design, because that is the optimal way to structure a multi-tiered solution. In what appears to be a contrast to a strict OO interpretation, behavior and data do not always have to be tied together.
The best example of this is the failed design of the EJB 2 entity beans. An entity bean represents a business model. Typically, models have to be persistent, and EJB 2 entity beans tried to support this behavior. That looked fine from an OO point of view, but not from a Separation of Concerns point of view. In order to support this behavior over the wire, each EJB 2 entity bean was in fact an RMI stub with its own RMI connection to the backend server. You can imagine how badly that scaled when, in a typical screen of a typical application, you are dealing with hundreds of entities at any time (think of a simple data table).
What happens is that entities have narrow scope. Most of the time they are request scoped. They are also processed in large numbers: they can typically scale up to millions of records in the database and up to hundreds in a single use case. Services, on the other hand, have singleton scope, or some variation of a managed singleton scope. That is how they are implemented in most frameworks (Spring, JEE). From this I believe you can see that it is natural that services are well suited to building gateways to remote behavior, and entities are better suited to representing data exchanged between two independent points of communication.
So my conclusion is that the application of the SOLID methodology leads to anemic designs. That is because separating data from behavior provides the most opportunities for reuse architecturally and optimal implementations technically. However, each of the separated parts (services as gateways to behavior and entities as exchanged data) can and does use OO at whatever level is practically useful.
Oh, and finally, this is my personal point of view from the experience I have gained. I do not claim to possess the absolute truth 🙂
Regards
@Δημήτριος Μενούνος
Every time someone mentions “SOLID” I cringe. All it is is Mr. Martin’s rebranding of existing OOP concepts in order to sell books.
I know what you mean, although I actually have no problems with rebranding and promoting existing concepts to sell books. Anything that promotes good practices is welcome and if that makes somebody money, good for him/her.
My problem is that SOLID in practice has become exactly the opposite of what it should be. If SRP promoted proper coupling and cohesion it would be fine, but more often than not it is used for the exact opposite effect, to separate data from behavior, which is not and never has been an OO thing. SRP became a tool people use to justify the regression into procedural programming.
I don’t know when this happened, or whether Mr. Martin intended it this way, but it is definitely pushing us in the wrong direction.
But has Mr. Martin ever been an OO thinker and/or practitioner? He routinely conflates using an OO language with doing OOP itself. Just take a look at what he has to say about encapsulation in Clean Architecture, in his chapter about OOP:
“Java and C# simply abolished the header/implementation split altogether, thereby weakening encapsulation… In these languages, it is impossible to separate the declaration and definition of a class. For these reasons, it is difficult to accept that OO depends on strong encapsulation.”
What? OO does not need encapsulation? Indeed this is his way of thinking. Check out the image in the example further down:
https://ibb.co/frrghVM
“This means that the UI and the database can be plugins to the business rules. It means that the source code of the business rules never mentions the UI or the database.”
Immediately you can see he has no concept of cohesion. How do I know this beyond this example? Because he misdefines it here:
https://ibb.co/zVrZsTy
“This class violates the SRP…”
How is an object calculating something from its internal state and being able to persist itself a violation of cohesion? His solution to this problem?
https://ibb.co/pK6ww20
So literally his solution to an object not being cohesive is to… not use OOP at all???
These are just a few examples I decided to post here. But when I tell people at work that Mr. Martin is incredibly overrated and simply writes to sell books, they think I’m crazy.
Question – from a real world view point, no employee calculates his own pay. His boss or HR or Accounting would do it. The employee would know regular and overtime hours worked but would not control the algorithm used to determine how that is turned into money. Why would the object model not reflect this ?
The simple answer to that is that we don’t model the real world as it is, because it wouldn’t be a good model. For one, there are really not that many “actors” that actually do things. Mostly that would be just humans, animals or computers. Consider this: Does a Chessboard really know where the pieces are? Does a Piece really know where it can go to? Does an Account really know how to calculate a balance? Does a Customer really know about a database or know his/her database id? In the real world they don’t. They are inanimate objects or even worse, just concepts.
One of the mental techniques of object-orientation is to anthropomorphize objects. In other words to attribute human (actor) traits to everything. In object-orientation everything is an actor and everything has duties and responsibilities.
With this context, the responsibilities and behavior of “real” actors, the ones that are actors in the “real world” too, will shift, because other things will take over some of their functions, while the application itself may introduce new functions that the “real” actors don’t even have.
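A tiny, made-up illustration of that anthropomorphizing: the real-world knight knows nothing, but in the object model we make it responsible for its own movement rules.

```java
// In the real world a knight is an inanimate piece of wood; in the object model
// we give it the responsibility of knowing how it may move.
class Knight {
    boolean canMove(int fromFile, int fromRank, int toFile, int toRank) {
        int df = Math.abs(toFile - fromFile);
        int dr = Math.abs(toRank - fromRank);
        return (df == 1 && dr == 2) || (df == 2 && dr == 1);
    }
}
```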
Hello 🙂 I read this article very carefully. I have been trying to find an answer to my question after 4 years of programming in PHP on a lot of different projects. All of them split the architecture this way. I started to feel like I’m not writing OOP code, just moving data from anemic objects to a ‘Service’ and vice versa. I have read a lot about it, and it makes me very sad that there seems to be no alternative that keeps the pure OOP paradigm in such projects. Is OOP dead, and can we say that this is the biggest failure in programming? Maybe we should start saying it’s just procedural code with ‘namespaces’. I would appreciate it if there is ANY repo which has a very good OOP design.
Many of the arguments you are presenting against the Layered Architecture are simply incomplete to the point of being, at best, naive and, at worst, disingenuous.
This can be summed up in the example you provide about adding an “unknown” flag to an `Account` – that it requires changes to all 3 layers of code. Well… it actually doesn’t require changes to _any_ layer. The UI should only be changed IF the UI needs to present the flag. The business logic only needs to change IF it needs to manage (or depend upon) the flag. And the persistence only needs to change if either of the aforementioned requirements is added (though CQRS could be used to alleviate this further – most modern architectures allow the UI to make ad-hoc reads as necessary). More importantly, all of these changes could be made, as required, IN PARALLEL by different teams without worrying about introducing a regression in “another layer”. You are seriously underestimating the rats-nest that can emerge by putting all of the logic for each “layer” in the same file – not to mention other possible pain points regarding scaling/deployment.
This brings us to the most fundamental problem with your argument: What do you do when the persistence model and the business model and the presentation model don’t match? Maybe a better question is: Why would they match? This entire post is predicated on the idea that there is some conceptual entity that exists in more than one layer at the same time. Clearly an `Account`, from a business POV, is going to be different than an `Account` from a user (UI) POV. The former may contain a list of `Transactions` along with `PaymentMethods` where the latter is simply a `Balance`, a `Username`, and a bunch of UI concerns.
Said another way, any similarities between data structures used within each layer are _incidental_, and will likely only retain the same shape in the most trivial of systems. You are only painting a picture above where there is some shared structure being passed between layers. Sure, if the _exact same_ data structure is being passed to 3 different functions, we can make a coherent argument for inverting the system and instead placing the functions on the structure. But what happens when one needs to change? Good luck separating everything after the fact!
Thanks for your reply. Let me respond in order.
Introducing the “known/unknown” flag doesn’t require any change only if it is not used at all. You’re right, I did not specify the use-case further, but let’s assume the flag is introduced because the user wants to see it, and it obviously needs to be persisted. This is a real use-case from a banking software I worked on. Also, I’d like to point out, that these types of requirements (things that need to be visible in some form, influence logic, and need to be persisted) are not rare, in fact they are the norm. Business people will rarely require changes that are not visible at least.
Layers cannot be properly worked on in parallel; that’s just wishful thinking. The UI always has a hard dependency on the layer below, so it always has to synchronize. You constantly have to talk to each other and be clear about the interfaces or any changes thereof, which, again, happens a lot because most changes need to be visible. At the project I mentioned above we constantly talked with the UI people because of changes we’d implement; curiously, a lot of the time there were changes coming the other way, the UI people telling us what they needed for some feature they had to implement on their side. So in practice, dependencies tend to run both ways.
I did not say that the persistence model has to match the business model. In fact, I agree with your points here that they don’t, shouldn’t, or usually can’t even. I don’t know where I implied such a thing. I did say that some objects may have a representation on the screen, which I still maintain; also, some objects may persist themselves or parts of themselves to whatever database in whatever form. That does not mean that there is a 1-to-1 relationship from a database row to a screen item. In fact, I would argue the other way, saying that Layered Architectures sometimes want to assume that the data model is the business model. Just look at how many JavaEE projects have JPA annotations in the “business” objects.
Also, in your example, you tried to describe the “Account” in terms of what it *has*, which is exactly the mindset I am arguing against. An “Account” should be defined by what *behavior* it has. For example: Can present itself, can transfer money out, etc. It doesn’t matter what data it has. The “POV” matters, but should be clear from the context. If you are doing a web-banking app, your “POV” is the retail user (or UI user in your terms). Your business case, your domain is retail. It doesn’t matter in the slightest that other systems may define an Account differently.
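In code terms, “defined by behavior” would look roughly like the sketch below (the names are made up, not taken from any real project): the interface only says what an Account can do, not what data it carries.

```java
// Defined by what it can do, not by what data it has.
interface Account {
    void transfer(Amount amount, Account target);
    DisplayFragment present();   // "can present itself"
}

// Supporting types, deliberately kept abstract; how the data looks inside
// is the account's own business.
interface Amount { }
interface DisplayFragment { }
```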
The thesis of your post is that splitting an application according to behavior (presentation logic, business logic, persistence logic) is counterproductive. Many of the points you are making in your comment above are contradictory to one another and/or your original post.
My point about the “unknown” flag is that the 3 layers should not be considered a single “cohesive functional unit”. That they change for _different_ reasons. Can we contrive a new use-case that requires each layer to change? Of course. But that isn’t an argument against the Layered Architecture. I can just as easily contrive a new use-case where only one layer needs to change. This is precisely why it makes sense to put the logic in 3 different files instead of a single file. And to clarify, when I said changes can be made in parallel, I meant literally, physically in parallel (a good VCS could obviously help to this end as well, but conflicts are inevitable when multiple devs are authoring changes in a single file).
What does your idea of `Account` look like when the models don’t match (and therefore, by definition, are not cohesive)? More importantly, how is it defined within a single file? This is what I mean when I say you are assuming some sort of shared conceptual model exists. My argument is that there is often no such thing as an `Account` from the UI perspective — and potentially from the persistence POV either, depending on the complexity of the physical model. That `Account` is just a projection used by the Business model to encapsulate some set of related behaviors and enforce invariants. It _can’t_ display itself because it does not hold a reference to the appropriate slices of data to meet the requirements for display. You know what _can_ present itself? The `AccountSummaryPage`. The fact that some of the same data may be used for both is _incidental_.
My argument is exactly opposed to defining objects according to data; instead, I opt for defining them according to behavior. An object that exhibits the behavior of presenting itself is separate from an object that exhibits the behavior of mutating state, which is separate from an object that exhibits the behavior of serializing itself into storage[0]. You see? A traditional “3-Layered Architecture” recognizes that an application, at the highest level, exhibits these 3 major behaviors, and asks us to define those behaviors separately from one another.
Your main argument against the Layered Architecture is more or less “I don’t want to change 3 files for a single change”. My point is that whether you change 3 places in one file or 1 place in 3 files, it still isn’t a “single change”.
[0] Annotations placed directly into the domain model for an ORM should be understood as “serialization hooks”, such that the persistence model can be derived from the domain model. This means the persistence model is never defined (in the traditional sense); rather, it exists fleetingly as necessary.
This entire post is just “let’s bash X just because it is old”. The text is littered with assumptions and personal opinions. You are charging against an N-tier strawman that exists only in your mind, and also against monoliths and waterfall, implying they are equally bad (because they are old), when in recent years, after the veil of fashion has been lifted, we know that monoliths are better than microservice spaghetti balls, and waterfall allows for better planning and empowers programmers more than the “agile” that is practised in most companies. See? Every approach can be executed the good way or the bad way, and you will find examples of both out there.
N-tier architecture is one of the most common ones found in practice. It doesn’t mean you don’t keep subdividing and abstracting the classes inside each layer. The layers can be the primary architectural division, or they can be secondary to another way of organizing code (like DDD, or “features” as some call them). Layers don’t go against OO design at all. If you follow OO design and the SOLID principles, you will end up with the same number of classes no matter whether you organize them vertically or horizontally.
You conclude your post claiming that “we all did X, but it is bad”. This is a fallacy. And then again: what are the alternatives? You don’t provide any.
Obviously I don’t have a problem with anyone disagreeing, since I know I hold a minority opinion on these subjects. But it seems to me you did not refute any of the points made.
I am curious, however, about one thing you’ve said, which I have now heard a couple of times: that my article is “littered” with assumptions and personal opinions. You say that like it’s a bad thing. Aren’t all articles and books (aside from scientific ones) opinions and assumptions based on personal experience? Aren’t the SOLID principles based on the opinions of Uncle Bob? Why would those be any more valid than this article? Isn’t your reply based on your opinion too? Unless a proper scientific study is done on some subject, we all just have our experiences and opinions to share.
This is a blog, not a scientific journal. So are all the other blogs, books, principles, architectures, designs, patterns, etc.