[–] eropple link

I do devops. I consult for startups. And while it would make me a lot more money in the short term to fuel their microservice-first, sparkly-architecture aspirations, this is exactly the approach I take when I pour some water on that. Your Big Ugly Monolith will get you where you're trying to go if anything will. You don't need services, you don't need microservices, you don't need some bloggable-as-heck Kubernetes setup with pet sets holding your billionty different datastores--you need a webserver and one data store and maybe a cache, eventually. You grow from there.

Where I do push, though (and this often surprises people because "what does a devops person know about code architecture"--the answer is "a lot, both from writing it and seeing it badly written"), is hard demands of app statelessness and an encouragement of business logic internals that are functional in nature, with I/O, wireups, etc. handled at the outer edges of the application. Functional-core-imperative-shell lends itself to decomposition later if you need services (and I say "need services" because the size of the set of companies that "need microservices" is within epsilon of zero)--you replace RPC with a network layer because you built your app with clean, bright-line divisions.
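
A minimal sketch of that shape in Python (all names invented for illustration): the core is pure functions over plain data, and all I/O lives in a thin shell at the edge.

```python
def apply_discount(order_total: float, loyalty_years: int) -> float:
    """Functional core: pure business logic, no I/O, trivially testable."""
    rate = min(0.05 * loyalty_years, 0.25)
    return round(order_total * (1 - rate), 2)

def checkout(order_id, db, payment_gateway):
    """Imperative shell: all I/O lives here. Because the collaborators are
    passed in, swapping `db` or `payment_gateway` for network-backed
    clients later never touches the core."""
    order = db.load_order(order_id)
    total = apply_discount(order["total"], order["loyalty_years"])
    payment_gateway.charge(order["customer"], total)
    db.mark_paid(order_id)
    return total
```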

Rule of thumb: if you have two "services" talking to the same set of data in the same general-purpose datastore (i.e., not pub-sub, not opposite ends of a job queue), they're the same service.

reply

[–] tnolet link

This is exactly what happened / is happening at my former employer. Already 2 years of work, team of 5 engineers, Kubernetes, dozens of microservices. Lots of time wasted on CI/CD, orchestration, monitoring, refactoring. Live date? Probably somewhere next year. It was sad to watch. 100% resume-based development.

reply

[–] avip link

RDD is 2010. It's called blog-driven development now (BDD)

reply

[–] ebiester link

I agree with the rest of this, but CI isn't hard on a monolith, and it's well worth it. (CD is a tradeoff. If you understand why it's important for you, it might be worth it, but does add complexity.)

reply

[–] Animats link

Rule of thumb: if you have two "services" talking to the same set of data in the same general-purpose datastore (i.e., not pub-sub, not opposite ends of a job queue), they're the same service.

Why? The whole point of a general-purpose database is to allow multiple applications to use the same data. Consider an online ordering system for merchandise. There's an service for checking product availability. There's a service for building a shopping cart. There's checkout and payment, the only one that faces the outside world that needs high security. There's customer order tracking. On the back end, there are various services which deal with fulfillment, shipping, accounting, and reordering. There may also be customer-relationship systems which can read order data for marketing purposes. Each of those functions can be worked on independently.

reply

[–] dasil003 link

In this case I think it's proper to consider the database as a service in its own right. This is the way it always used to be done, and there are significant advantages, such as being able to focus on safeguarding the data, and leveraging ACID and constraints/stored procs to make the application code less error-prone.

The downside is you have a potentially hard scalability ceiling, and you have coupling of every downstream service to the schema which means all teams have to coordinate with the DB team for schema changes.

I think startups now always go the vertical services route without thinking too hard about it because A) they all aspire to be Amazon/Facebook/Google even though 99.99% will never face that scale and B) resume-driven development.

reply

[–] humanrebar link

> resume-driven development

To be fair, until companies are willing to pay a premium (1) for working on bad-for-resume projects, this is very rational behavior on the part of the technical team.

1) That is, job listings can't have merely "competitive" or "market rate" salary expectations.

reply

[–] ris link

> The downside is you have a potentially hard scalability ceiling

Which 99.9% of projects will never ever hit, and those that do will by that point have become such an extreme scale that significant reengineering should be something that's happening anyway.

I couldn't agree more with your last paragraph.

reply

[–] vog link

This can't be said often enough: Scaling is a luxury problem!

It is far more likely that a company gets a "small", stable user base that provides enough income, than it is to become the next Google.

It doesn't make sense to apply all the complexity for large-scale problems when your user base isn't even small-scale. And if your monolith is structured in a remotely sane way, you can introduce microservices (or whatever will be best practice) later on.

On the other hand, it doesn't hurt to write your code with performance and scalability in mind. Neglecting this too hard may result in bad performance even for the first 10 users. But that's more in the sense of "keep it simple enough so it can be easily restructured later on". And in the sense of "measure performance to know your hot spots" ... and concentrate your efforts there.

reply

[–] ris link

> On the other hand, it doesn't hurt to write your code with performance and scalability in mind. Neglecting this too hard may result in bad performance even for the first 10 users.

Absolutely. But the architectural choices you make for making the first 1000 users fast are often the complete opposite of ones you'd have to make to make it "scale" to 100k users.

Service/microservice architectures that I've seen are, from an absolute point of view, hideously inefficient, simply because the component that needs data isn't able to ask for the data from the other component (way across the other side of the architecture) in a precise enough fashion. Cue reams of data being sent to you that you don't need because you need to figure out one small aspect of it. And of course, jsonifying it all and de-jsonifying it a few times for good measure. All this with no transaction safety.

Now, if you're operating at the kind of scale where you know that these specific calls are now part of your business critical requirements, you can spend the effort optimizing the hell out of them and reap the gain of horizontal scalability. Of course, that comes at the expense of flexibility.

reply

[–] kpil link

The main problem is that micro service based architecture is sold as a silver bullet that takes care of all problems.

Knowledgeable and experienced people know that there is no free lunch, but not every opinionated voice is so well informed, and refuting bullshit takes ten times more effort than producing it, so objections might be handwaved away as backwardness if there is a strong push for a modern architecture.

I mean, more of the same old shit isn't exactly an easy sell either, even if every old bad choice could be rationalized at the time...

reply

[–] Animats link

As I point out occasionally, Wikipedia is MariaDB front-ended by various caching systems. There's some auxiliary stuff for logging and searching, but its synchronization is not mission critical. Wikipedia is the ninth most popular site on the web. You're probably not going to be bigger than that.

reply

[–] jon-wood link

Wikipedia is also massively cacheable, much more so than almost any other application.

reply

[–] rdnetto link

A variation on this would be to put the database behind a service that abstracted over the schema, though that only works for basic CRUD queries and not complex aggregations. This service would probably evolve from the monolith.

reply

[–] rhizome link

It sounds like you're describing ORM-as-API. Which, if you find you need it, great, but I wouldn't fool myself that introducing a mediating process like that is all that different from what the language/framework likely already provides (most contemporary frameworks ship one). It's just ("just") abstracted to operate over the network. Actually, what you describe sounds like a reduction in functionality from a non-API model.

Hopefully that parses as English, it's still morning for me.

reply

[–] collyw link

You mean like an API?

reply

[–] undefined link
[deleted]

reply

[–] sbov link

A huge point of microservices is to create code that is independent of each other. Your database backend is tightly coupling what should be completely independent services. Microservices were designed to scale your engineering departments just as much as (if not more than) your performance. You are throwing away the major point of why microservices were created in the first place when you do what you're doing: so teams in large organizations can do independent releases. That you cannot see this as a major benefit makes me think microservices are a poor choice for your use case.

reply

[–] eternalban link

> You are throwing away the major point of why microservices were created in the first place when you do what you're doing: so teams in large organizations can do independent releases.

Let's amend this: so teams in large organizations +{that lack both a coherent architecture and the ability to devise one} can do independent releases +{by adopting the no-architecture architecture}.

reply

[–] marcuslager link

"teams in large organizations"

Could be simplified to just "teams".

These days from what I can see teams in organizations with no ability to devise a sane architecture usually take the microservice route. Management loves this new buzzword. To them it means problems go away.

reply

[–] tajen link

Atlassian started rearchitecting around services in 2013. They were 1,500 people at the time. Confluence and JIRA were not only monoliths, but non-multitenant, so they had to have one instance per customer (700Mb RAM JIRA, 700Mb Confluence, a dozen gigs on SSD). The worst was restarting upon upgrades: easily 3-5 minutes per instance, which, at scale, was a huge burden.

After rearchitecting around services, pieces could be restarted and upgraded independently. As a customer, we didn't notice any difference in the UI even as pieces moved around (e.g. the file storage onto AWS).

Epilogue: They had a shared login system, multi-tenant and all... which they recently replaced with a third party. Proof that services are replaceable, but also an acknowledgment that even simple critical services can be hard at scale.

I personally believe that Atlassian switched to services at the right time, when it's hard to coordinate teams of thousands working on a dozen products, and when the monolithic approach was way past refactoring date ;)

reply

[–] tokenizerrr link

On the flip side, cloud hosted jira is soooo slow now.

reply

[–] tajen link

A bit late to answer, but by their EULA you are not allowed to talk about the performance of their products.

Yes, they did that!

reply

[–] eternalban link

Rather a late reply, but Service Oriented Architecture is in fact an architectural approach. "Micro" services treat even the work of devising a coherent "service" as too great a burden to tackle.

reply

[–] ris link

> Confluence and JIRA were not only monoliths, but non-multitenant, so they had to have one instance per customer (700Mb RAM JIRA, 700Mb Confluence, a dozen gigs on SSD).

I don't understand how you engineer even a monolith like that in the first place.

reply

[–] kod link

And Jira still can't handle email in a sane fashion...

I wouldn't look to Atlassian for anything regarding engineering practices.

reply

[–] tomerbd link

Can you describe a coherent architecture that would make microservices unneeded?

reply

[–] eternalban link

Missed this, but in case you read it: that depends on the purpose of the system, but in most cases modular schemas plus loose coupling via a data/message bus does the job.

reply

[–] tunesmith link

If that needs to be scaled out to many services, then you also want to scale out the data model. Meaning, events in a commit log, and services that subscribe to that commit log to create their own schemas that are tuned for their own needs. The more I think about commit logs, the more I think giant monolithic database schemas are just kind of weird - a strange compromise between history and state.
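
As a toy sketch of the idea (events and names invented for illustration), each consumer folds the same commit log into a schema tuned for its own queries:

```python
# A shared, append-only log of events.
events = [
    {"type": "order_placed", "order_id": 1, "amount": 40},
    {"type": "order_placed", "order_id": 2, "amount": 60},
    {"type": "order_shipped", "order_id": 1},
]

# "Shipping" consumer: only cares which orders still need shipping.
pending = set()
for e in events:
    if e["type"] == "order_placed":
        pending.add(e["order_id"])
    elif e["type"] == "order_shipped":
        pending.discard(e["order_id"])

# "Revenue" consumer: folds the same log into a completely different state.
revenue = sum(e["amount"] for e in events if e["type"] == "order_placed")
```

History lives in the log; each service derives only the state it needs.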

reply

[–] xxxn link

this sounds like an absolutely enormous pain in the arse (cognitive overhead if you prefer.) what, specifically, is the bigger pain in the arse out there that one needs all this in order to avoid? it must be pretty awful.

also, what happens down the line when you suddenly need a service which is contingent on the data stored in multiple other services? do you add another synchronisation layer over the top?

reply

[–] catern link

That's an interesting idea. If I understand correctly, you're suggesting that services simply broadcast the data mutations they are going to perform. Then, any other services that wish to use that data can filter those broadcasted mutations down to what is relevant to the other services, and store the data in a way that is efficient for what the other services want to do.

Are there any examples of things architected in that way?

Doesn't it also run into the problem of needing to version the events that you broadcast?

reply

[–] jamesblonde link

It's the Kafka Kool-Aid being regurgitated. Kafka is the pipe that you need. Microservices provide APIs to the state they manage, which they get from Kafka. That all scales out, and services have nice independent failure modes (as long as Kafka doesn't fail). What it doesn't consider, however, is whether you have to perform atomic operations across microservices (you can't easily do this with existing technology). And, yes, you would like exactly-once processing of events, which is non-trivial, even with the new exactly-once delivery feature in Kafka.

If your services are completely separable, this approach is great. Otherwise, monolith it.

reply

[–] tomnipotent link

Event Sourcing [0] & CQRS [1] come to mind. I'm also a fan of Bottled Water [2] that turns Postgres ops into a stream of events that you can then push to other services.

[0] https://martinfowler.com/eaaDev/EventSourcing.html

[1] https://martinfowler.com/bliki/CQRS.html

[2] https://github.com/confluentinc/bottledwater-pg

reply

[–] taurath link

It's hilarious to me that that came up, because that was a big part of the 6 month delay.

reply

[–] raarts link

Any particular one or all three of them?

reply

[–] undefined link
[deleted]

reply

[–] undefined link
[deleted]

reply

[–] karmajunkie link

Whether they can be worked on independently is not the criterion that matters (and in point of fact, it's a false assertion in this case: if they're using the same database, neither can iterate on the schema of the data completely independently of the other). What matters is: can I take this service completely offline and have the other still function? If they share a datastore, then the answer is no.

reply

[–] srtjstjsj link

All services depend on a router, so you can't take any one service _completely_ offline, so they are all part of one service.

reply

[–] erikpukinskis link

You presumed before you even started design that there would be such a thing as an order, which would be implicated in shopping, payment, and tracking.

There could also be separate order, invoice, and delivery.

Essentially your question is: what if my models are highly overloaded? Won't they need to be used in multiple places?

And of course the answer is yes. But if you are using the same model in dramatically different contexts you probably have a problem of another sort. This is the dreaded "One Object To Rule Them All" OO anti-pattern.

reply

[–] Ma8ee link

And one day you realize you have to change your data model, and absolutely all your applications are coupled to your whole database. Oops!

reply

[–] Animats link

You can add a column to an SQL database without difficulty. If you're using one of those "turn database rows into objects" packages, you may have problems, but if you use SQL properly, it's fine. Components that are only data consumers will not even see the new column.
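
A quick illustration with Python's sqlite3 (schema invented): a consumer that names its columns explicitly keeps working unchanged across an additive migration.

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
db.execute("INSERT INTO orders (total) VALUES (19.99)")

# An existing consumer that selects its columns explicitly...
def read_totals():
    return [row[0] for row in db.execute("SELECT total FROM orders")]

before = read_totals()
# ...is untouched by an additive migration.
db.execute("ALTER TABLE orders ADD COLUMN coupon_code TEXT")
after = read_totals()
assert before == after  # the consumer never even sees the new column
```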

reply

[–] olmo link

There is a solution for this problem: I use a monolith that makes 100% of its SQL queries using LINQ (including massive updates and so on). Since LINQ is strongly typed, you can rename/remove/split tables and columns, fix the compilation errors, create a migration and deploy the new monolith. We create an average of 5 migrations per day in a project with around 300 tables, for 2 years, and no signs of an unmaintainable ball of mud so far.

Disclaimer: I'm the main developer behind Signum Framework

reply

[–] noir_lord link

Just inherited exactly that.

The final cherry is that it uses zero joins, it's all done in code.

It's going to be a bastard to unfuck that mess.

reply

[–] taurath link

Couldn't agree more - inheritance chains and heavily stateful instruction environments are the number one blocker to eventually decoupling a monolith. The "enterprise" code style is absolutely what you need to avoid in a startup.

reply

[–] JBReefer link

Absolutely agreed, where Enterprise === "making a profit"

reply

[–] quickben link

If you avoid what you listed in actual software, you'll be left with a mess that will take years and tons of money to fix.

reply

[–] beat link

Yes, but if you don't avoid it, you might never get off the ground to the point where you have years of time and tons of money in the first place.

reply

[–] quickben link

I guess it depends on the viewpoint. Yours sounds valid. Mine comes from the occasional fixing people's messes. But I agree with you, some product is better than no product in most cases.

reply

[–] beat link

I have a lot of career experience dealing with legacy software. I feel that pain intensely. But still... a working product with bugs and flaws is better than a not-working product that is elegantly built. Or half-built.

reply

[–] undefined link
[deleted]

reply

[–] srtjstjsj link

what is "enterprise" code style?

reply

[–] Turbots link

Exactly.

Make a monolith first, but do it with clean code and twelve-factor apps in mind. That's why I love using Spring Boot.

reply

[–] staofbur link

I'd argue it's cheaper to scale out the monolith, or to introduce isolated functional silos and scale those out, than to even bother migrating everything to microservices. Also you certainly can't port a complex monolith to microservices; you have to start again. We're quite happily able to shift 15,000 requests/sec across over 2,000 HTTP endpoints with a monolith, and our kit is at ~20% capacity. Need more? Slice up a silo some more.

The key aspect when you design the monolith is not building a spaghetti monster. If X doesn't need to talk to Y, put a wall there (at the package level).

reply

[–] eropple link

> Also you certainly can't port a complex monolith to microservices

Upvoted you, but I disagree a little. "Microservices" tend toward pathological cases, but you 100% can segment a monolithic app into services. Each module in your application has an interface. Behind that interface, replace it with network calls that fulfill the interface's contracts. (If your monolith doesn't have sections that encapsulate work--well... you kinda did it to yourself, but IME that's rare.)
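
A hypothetical Python sketch of that move (all names invented): callers depend on an interface, and an in-process module and a network-backed client both fulfill the same contract, so extraction doesn't ripple outward.

```python
from typing import Protocol

class InventoryService(Protocol):
    def stock_level(self, sku: str) -> int: ...

class LocalInventory:
    """The module as it lives inside the monolith."""
    def __init__(self, levels: dict):
        self._levels = levels
    def stock_level(self, sku: str) -> int:
        return self._levels.get(sku, 0)

class RemoteInventory:
    """Drop-in replacement: same contract, network call underneath."""
    def __init__(self, http_get):
        self._get = http_get  # e.g. a thin wrapper over an HTTP client
    def stock_level(self, sku: str) -> int:
        return int(self._get(f"/inventory/{sku}"))

def can_ship(inv: InventoryService, sku: str) -> bool:
    # Callers never know (or care) which implementation they got.
    return inv.stock_level(sku) > 0
```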

reply

[–] staofbur link

You can, hence my comment about silos. In my experience, though, the interface is rarely well defined enough to be abstracted over a network boundary. One of the things I see in a lot of monoliths is leaky abstractions, and they are really difficult to rein in.

We tend to go for throwing related web front ends and API endpoints into the same ball-of-mud silo, and backing that directly with the storage and cache services. We scale those silos up and partition across tenants too. I think a couple of the silos are around the 1 million LoC mark now as well.

reply

[–] eropple link

Fair enough. In places where I've had to deal with that, I was in the code review seat ahead of time and was generally able to go "no no no, let's not do that"--but I can see how stuff can decay.

reply

[–] staofbur link

I'm always late to the party. I'm the cleanup team :)

reply

[–] to3m link

> Behind that interface, replace it with network calls that fulfill the interface's contracts

Just like that?!

You can encapsulate work fine, without making microservices at all easy to retrofit. There's a big difference between calling a function that does something and returns a result, and calling a function that does nothing until you've gone back to the main loop to handle network input.

(You can confront many of these issues by implementing your services as threads from day 1. Decoupling request and response is a major issue, and this will force you to do that straight away. The other issues associated with moving from single process to multiple processes are fairly minor by comparison.)
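
A toy version of that structure in Python (names invented): the "service" is a thread fed by a request queue, so request and response are decoupled from day one, as a network hop would force.

```python
import queue, threading

req_q, resp_q = queue.Queue(), queue.Queue()

def pricing_service():
    # A "service" loop: consumes requests, emits responses.
    while True:
        req = req_q.get()
        if req is None:          # shutdown sentinel
            break
        resp_q.put(req["qty"] * 9.99)

t = threading.Thread(target=pricing_service)
t.start()
req_q.put({"qty": 3})            # "send" the request
total = resp_q.get()             # caller explicitly waits, as it would over a network
req_q.put(None)
t.join()
```

Moving `pricing_service` into its own process later mostly means replacing the two queues with sockets; the caller's shape doesn't change.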

reply

[–] srtjstjsj link

> There's a big difference between calling a function that does something and returns a result, and calling a function that does nothing until you've gone back to the main loop to handle network input.

The difference is latency, so of course you put extremely low-latency operations inside the same service.

reply

[–] fishnchips link

That. Most folks understand services as separate codebases, running as separate deployments. My take is that they are logical entities to a degree where their physical implementation is irrelevant to their clients.

reply

[–] sanderjd link

I've experienced the opposite: you get traction with a monolith, and when it's time to expand and scale, you find yourself with a massive kludge that makes it hard and slow to make progress.

Which is not at all to say that you're wrong - a long delay in the capability to release something can definitely be an earlier and more severe death knell than friction when trying to grow.

For me personally, I like to see all of: tools that aim to make it easier to modularize from the start, tools that aim to make it easier to modularize an incumbent monolith, and tools that make monoliths work better themselves. I think all of these solve real problems for different teams at different stages of growth and facing different trade offs.

reply

[–] dasil003 link

Sure, but if you build microservices from the start, you die before you get traction because refactoring takes too long while you are finding product-market fit.

The right way to do it is to write a modular monolith and maintain a loose plan on how to break it up later. Admittedly this is difficult in SV's youth-obsessed culture where "senior" devs have 5 years experience, and staying at a company long enough to understand the long-term repercussions of architectural choices is seen as a black mark on your resume, but on the bright side languages like Elixir/Erlang help you do the right thing more naturally than more traditional languages.

reply

[–] NathanKP link

> because refactoring takes too long

In my experience refactoring microservices only takes a long time if the microservices are badly done, just like refactoring a monolith only takes a long time if the monolith is a horrible piece of spaghetti code.

If refactoring requires modifying multiple microservices then the microservice boundaries and contracts were badly designed at the start. When you have practice drawing boundaries well then refactoring with microservices becomes very fast, as you only have to modify the code of one service and redeploy it.

I think too many microservice enthusiasts have a tendency to claim monoliths always turn into spaghetti code, and monolith enthusiasts claim microservices are always slower and harder to refactor. This is just a sign that neither group knows how to do the other architecture properly, because it's possible to build microservices fast and refactor them quickly, just like it's possible to build a monolith that is clean and scalable.

reply

[–] joncrocks link

I think the parent's comment around refactoring was precisely that you don't know what the contracts and boundaries between your microservices should be before you build them, as you don't know much about the problem space yet.

reply

[–] Steeeve link

> where "senior" devs have 5 years experience, and staying at a company long enough to understand the long-term repercussions of architectural choices is seen as a black mark on your resume

I had to laugh at that...

And then throw up just a little.

reply

[–] sanderjd link

The point of my comment is that I both agree with all that and like to see people working on the problem of making it work better to start with microservices if that's what your team thinks is best for your project. It's not a law of the universe that microservices must result in death. It's a tooling problem. I don't think the tools are quite there yet, but they've improved a lot in the past few years. I think that's a good thing and look forward to even more improvement in the future.

reply

[–] Daishiman link

> I've experienced the opposite: you get traction with a monolith, and when it's time to expand and scale, you find yourself with a massive kludge that makes it hard and slow to make progress.

If you have a team that can't make a monolith scale because of kludges, that says more about the team than the monoliths.

reply

[–] sanderjd link

sigh this cliche adds little to the discussion. You could say the same from the other side: "if you have a team that can't make microservices work because of tangled service dependencies, that says more about the team than the microservices". That also adds little to the discussion.

Different architectural choices lead to different challenges. Different teams in different organizations will rise to those challenges with different degrees of success. But it's still worth thinking about the trade offs of the different architectures from the perspective of what they make easier and harder.

reply

[–] BurningFrog link

I think the "micro" part of the name is really unfortunate for "microservices".

"Micro" means "one millionth", so it's easy to get the impression that you should have at least thousands of "microservices".

In reality, if each of your microservices doesn't fill up at least dozens of heavily loaded servers, you're better off with a monolith architecture.

reply

[–] heisenbit link

There is one area where I feel it makes sense to consider going in a microservices direction from the beginning: user accounts. It is clearly re-usable, it can increase security, and while it takes more effort, it is not so costly.

reply

[–] brightball link

That's actually one of the biggest perks of Elixir IMO. You can have a monolith that's basically already separated into microservices.

reply

[–] taurath link

If you don't have a product yet or the parameters could change quickly with new business insight, you need to be able to change it fast. With microservices you will be spending half your time figuring out orchestration, building data flows that people can understand, and doing ops. Last startup I was in delayed their launch date for >6 months because of their architecture. Way too many people think they need it, but a load balanced monolith can take you from 0 income to able to hire more engineers.

reply

[–] transitorykris link

"Any organization that designs a system (defined broadly) will produce a design whose structure is a copy of the organization's communication structure."

This is a statement about an existing organization and the system they design. It doesn't say you can design a system and then the organization of your choice forms around it.

Bezos had to start with a management edict about how the organization is structured (by threat of firing) to get the SOA system he desired at Amazon. https://plus.google.com/+RipRowan/posts/eVeouesvaVX

reply

[–] phamilton link

> This is a statement about an existing organization and the system they design. It doesn't say you can design a system and then the organization of your choice forms around it.

I think both systems and org structure are iterative and evolving. A mismatch between the two will cause friction. Change the org structure and the monolith starts looking like independent services except without the conveniences of microservices (deployment, isolation, independent scaling, etc). Try to change the monolith without changing the org structure and your microservices will eventually turn into tightly coupled RPC hell.

Sometimes that friction prompts change in the org, sometimes it prompts change in the code.

reply

[–] crdoconnor link

>Microservices allow and require low coupling in the organization. If you want to reduce coupling in your org, you'll be well served by microservices. If you want tight collaboration in your org, you'll be well served by a monolith. As orgs grow into multiple independently executing units, a monolith starts to limit the ability to independently execute.

Most individual computers combine seamlessly interoperating software that was, ultimately, written by tens if not hundreds of thousands of people, most of whom do not know one another - all the way from the kernel to the highest level scripting languages.

I think you're right microservices are primarily about Conway's law. I just don't think it's about operational effectiveness (it actually makes operational effectiveness harder, IMO).

I think it's primarily successful in large orgs because it minimizes finger pointing by establishing tighter loci of responsibility.

My rule of thumb is "could this service be spun off as a separate business?". E.g. postcode->address service, sure, microservice that. Image manipulation service->maybe yes, maybe no, "user" service-> no, price calculation service->no

reply

[–] ValentineC link

> My rule of thumb is "could this service be spun off as a separate business?". E.g. postcode->address service, sure, microservice that. Image manipulation service->maybe yes, maybe no, "user" service-> no, price calculation service->no

Curious: why would a "user" service not be viable as a separate business? There's always space for one more authentication option.

reply

[–] mixedCase link

I believe a simpler explanation is that it's easier for some devs to cleanly separate concerns when they're forced to do so by the constraint of process separation, rather than by language modules/packages, where it's too easy for a junior dev to break the architecture with a single import.

Keeping a monolith's concerns cleanly segregated does require a small amount of discipline.

reply

[–] XorNot link

I would propose that an alternate explanation is that when the processes are actually separate in deployment, you filter out strong personalities (or management types who are overly involved in things outside their area) from dominating the development process as well.

Stopping someone working in one area from deciding they just don't like the look of the other is a benefit all its own.

reply

[–] crdoconnor link

>it's easier for some devs to cleanly separate concerns when they're forced to do so by the constraint of process separation

This is a dangerous ideology. Process separation doesn't enforce loose coupling; it simply makes the pain of tight coupling between services that much worse.

I worked on a "microservice" system once where I regularly had to debug a problem across four different service boundaries and two different languages. It felt like I was using boxing gloves to perform surgery on a Rube Goldberg toy.

reply

[–] phamilton link

Conway's law explains why that discipline is harder in practice. If everything about the business suggests that two things are tightly coupled, then naturally someone is going to make that import.

reply

[–] lucisferre link

Modularly designed software, with appropriate boundaries, can accomplish the same thing without the overhead of microservice deployments.

During the early stages, when maybe 1-2 people are coding, microservices add a lot of unneeded overhead and orchestration.

reply

[–] tyurok link

It's also about teams being independent in their processes, stacks, and tools. Monoliths usually carry infrastructure, library, and tooling constraints that are hard to break.

Yes, microservices add overhead in tooling. You have to weigh the trade-offs carefully; it only pays off at a far larger scale than most organizations ever reach.

reply

[–] undefined link
[deleted]

reply

[–] watwut link

Proper collaboration in a human organization is best served by clear responsibilities, roles, and accountability. A monolith in a big organization leads to turf wars, random dudes attempting to micromanage components they know nothing about, committees designing simple functionality, etc. So you split it into pieces with clear lines between them and voila, hours-long bullshit meetings are no longer necessary.

reply

[–] phamilton link

It all comes back to Conway's Law (your software will look like your organization).

Microservices allow and require low coupling in the organization. If you want to reduce coupling in your org, you'll be well served by microservices. If you want tight collaboration in your org, you'll be well served by a monolith. As orgs grow into multiple independently executing units, a monolith starts to limit the ability to independently execute.

reply

[–] ris link

> Having distinct silos of activity/responsibility, separate teams and communications channels; all can make a large project more manageable than the monolith by allowing the lower level problems to be abstracted away (from a management perspective).

It also has the effect of making small projects large even if they never needed to be in the first place (which of course looks great on a resume)

reply

[–] srtjstjsj link

Reliability is not important to a startup.

http://www.whatisfailwhale.info/

reply

[–] JshWright link

Reliability is not important to _many_ startups.

There are plenty of industries where solid reliability is a hard requirement.

reply

[–] pjmlp link

Only because people got to learn software isn't to be trusted.

If companies got heavy fines like in other industries for selling broken products, the scenario would be quite different.

reply

[–] upbeta link

Not until one realizes it is.

reply

[–] khana link

Don't agree. Get it right the first time.

reply

[–] morphemass link

Just about every one I've interviewed with recently has been breaking their monolith up into micro-services for some reason.

When I've done this in the past I had a key goal: reliability. The cost was about 10x the development effort of the monolith in order to add an extra 9 to the reliability. The monolith was wonderful for getting up and running quickly as a business solution but it actually crippled the business because they had failed to identify how essential reliability was. KYC.

Personally I've come to the conclusion that the main benefits of SOA/MSA are not necessarily technical but more organisational/sociological. Having distinct silos of activity/responsibility, separate teams and communications channels; all can make a large project more manageable than the monolith by allowing the lower level problems to be abstracted away (from a management perspective).

reply

[–] maxxxxx link

A lot of people don't have the discipline to write decent libraries, so they need the overhead of microservices to force them to structure their code reasonably. It seems to me you can get exactly the same boundaries between components that microservices give you just by having good component separation.

reply

[–] tiziano88 link

I feel 100% like you. Most developers won't think about clear interfaces between components unless they are forced to do so by an RPC layer. I think at some point we are going to realise this and retrace our footsteps, ending up with a monolithic architecture with clearly defined boundaries between components, enforced by expressive type systems.
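A minimal Python sketch of that idea (all names hypothetical): other components depend on a narrow typed interface, not on the implementing module, so the compiler/type checker polices the boundary the way an RPC contract would.

```python
from dataclasses import dataclass
from typing import Protocol

# The only thing other components are allowed to see: a narrow,
# typed contract -- the in-process equivalent of an RPC interface.
@dataclass(frozen=True)
class Invoice:
    customer_id: str
    total_cents: int

class BillingService(Protocol):
    def invoice_for(self, customer_id: str) -> Invoice: ...

# The concrete implementation lives behind the interface; callers
# typed against BillingService never import these internals.
class InMemoryBilling:
    def __init__(self) -> None:
        self._charges: dict[str, int] = {}

    def record_charge(self, customer_id: str, cents: int) -> None:
        self._charges[customer_id] = self._charges.get(customer_id, 0) + cents

    def invoice_for(self, customer_id: str) -> Invoice:
        return Invoice(customer_id, self._charges.get(customer_id, 0))

def monthly_report(billing: BillingService, customer_id: str) -> str:
    # This caller depends on the boundary, not the implementation.
    inv = billing.invoice_for(customer_id)
    return f"{inv.customer_id}: ${inv.total_cents / 100:.2f}"
```

A static checker like mypy will flag any caller that reaches past `BillingService` into `InMemoryBilling` internals, which is the "expressive type system" doing the enforcement.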

reply

[–] mgkimsal link

> Most developers won't think about clear interfaces between components unless they are forced to do so by an RPC layer.

The developers I know who can't think about clear interfaces are also the ones that won't know how to write clear RPC, and won't be able to create clean microservices.

reply

[–] maxxxxx link

But they will have a "microservices architecture", which looks better on your resume than a monolith.

reply

[–] eropple link

This is exactly true. And so is the reverse: competently written code can be segmented out into a SOA by replacing your internal procedure calls with a network call.
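As a sketch of that segmentation (names and wire format invented for illustration), the same caller can be wired to an in-process implementation or a network-backed one, because both satisfy one interface:

```python
import json
from typing import Callable, Protocol

class UserDirectory(Protocol):
    def email_for(self, user_id: int) -> str: ...

class LocalUsers:
    """In-process implementation: a plain method call."""
    def __init__(self, users: dict[int, str]) -> None:
        self._users = users

    def email_for(self, user_id: int) -> str:
        return self._users[user_id]

class RemoteUsers:
    """Same contract, over a network hop. The transport is injected,
    so callers still depend only on the service boundary."""
    def __init__(self, get: Callable[[str], str]) -> None:
        self._get = get  # e.g. a thin HTTP GET helper in real code

    def email_for(self, user_id: int) -> str:
        return json.loads(self._get(f"/users/{user_id}"))["email"]

def greeting(directory: UserDirectory, user_id: int) -> str:
    # Caller code is identical whichever implementation is wired in.
    return f"Hello, {directory.email_for(user_id)}"
```

If the monolith is already written against such boundaries, "segmenting out into a SOA" is mostly swapping `LocalUsers` for `RemoteUsers` at wire-up time.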

reply

[–] Terr_ link

Caveat: If it's acceptable to have significant latency or the interaction is asynchronous.

reply

[–] eropple link

Sure, but if it's not acceptable to have significant latency then they're probably the same logical service, yeah?

reply

[–] undefined link
[deleted]

reply

[–] srtjstjsj link

if you have component separation, you have microservices

reply

[–] qwer link

I think it's more nuanced than that, and one can consistently hold both viewpoints at the same time. You just need to have had the experience that an SOA-first approach is even worse than the ball of mud.

reply

[–] gluczywo link

> even experienced architects working in familiar domains have great difficulty getting boundaries right at the beginning. By building a monolith first, you can figure out what the right boundaries are

This line of thought reaches back two decades and was expressed in the wonderful essay "Big Ball of Mud": http://www.laputan.org/mud/

EDIT: updated with the quote

reply

[–] FRex link

The common pattern he mentions reminds me of the concept of 'semantic compression' (one big function and lots of variables first, then break it up into structs, classes, functions, etc.) by Casey Muratori: https://mollyrocket.com/casey/stream_0019.html

It's a very nice and natural way to write code: do it all horribly dirty first, and only when a sizeable portion is ready do you start cleaning it up and making it look and read well.

Both are basically "good comes from evolving/refining bad".

reply

[–] dankohn1 link

It may be a bit simplistic for HN, but you may enjoy a talk I've given, "Migrating Legacy Monoliths to Cloud Native Microservices Architectures on Kubernetes", and especially the visual metaphor from slide 26 on of chipping away at a block of ice to create an ice sculpture.

https://docs.google.com/presentation/d/105ZgwafitwXH6_sWevFH...

reply

[–] jakozaur link

Rule of thumb: the number of full-time backend engineers, divided by 5 and rounded up, is the number of microservices you can afford.

E.g. if you have 500 engineers, having 100 microservices is fine. If you have 3 engineers and try to have 20 microservices, you are wasting tons of time; you should do a monolith.
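The rule of thumb as a one-liner (the divisor of 5 is the parent comment's heuristic, not an established constant):

```python
import math

def affordable_microservices(backend_engineers: int) -> int:
    # Parent comment's heuristic: engineers / 5, rounded up.
    return math.ceil(backend_engineers / 5)

print(affordable_microservices(500))  # 100
print(affordable_microservices(3))    # 1 -- so 20 services is ~20x over budget
```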

reply

[–] ChristianGeek link

What is "3 depths deep?" Google doesn't turn up anything.

reply

[–] navalsaini link

Services call other services in a microservice architecture. Three depths deep means there are at most three network hops and no more (it's strangely similar to Inception's three-dreams-deep adventure). Each network hop adds latency and a risk of failure from the network layers, so a three-levels-deep rule is usually followed in a microservices architecture to keep failure rates low, make debugging easier, keep time-to-find-the-culprit short, etc.
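A back-of-the-envelope illustration of why depth is capped (the 99.9% and 20 ms figures are purely illustrative): on a synchronous call chain, availability compounds multiplicatively while latency adds.

```python
def chain_availability(per_hop: float, hops: int) -> float:
    # Every synchronous hop must succeed, so per-hop availability
    # compounds multiplicatively with call depth.
    return per_hop ** hops

def chain_latency_ms(per_hop_ms: float, hops: int) -> float:
    # Latency on a synchronous call chain is additive.
    return per_hop_ms * hops

# 99.9%-available services, three hops deep: ~99.7% end to end.
print(round(chain_availability(0.999, 3), 6))  # 0.997003
print(chain_latency_ms(20.0, 3))               # 60.0
```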

reply

[–] hderms link

Law of Demeter?

reply

[–] navalsaini link

I agree with monolith first, and have proposed talks to a few JS conferences on this topic. I however have not worked at a company that uses a microservices architecture at big scale (like Uber, Instagram, etc.). I am keen to understand: (1) What does it mean to run a microservices architecture from an org point of view? (2) How are principles like three depths deep enforced? (3) How does a developer decide when to create a new microservice vs. when to reuse one? (4) Who manages the persistence layer and the associated devops tasks (backups, failover, replica sets, etc.)? Those are mostly the uncovered bits for me.

I came across a very recent talk by Uber along these lines: JS at Scale (https://www.youtube.com/watch?v=P7ek4scVCB8). I think a few talks on the organizational side of microservices would give people a clear idea of whether they really need one.

Also, though startups use the term microservices, their architectures do not in reality have as many boxes as the Uber talks most of us listen to. Startup microservice architectures do have single points of failure; they just break things up to make it easier to scale beyond 100 or so concurrent users. The decomposition is mostly between tasks that are IO-bound (serving APIs) and tasks that are more CPU-bound (some primary use case). So startups using microservices may not be that bad actually. They could just mean doing an RPC over Redis for some computationally intensive use case.

reply

[–] olingern link

Any advice you would give to a team going this route?

And, I've found that integration tests help keep my sanity if a refactor occurs anywhere. Any learning(s) for adapting to refactors across services?

reply

[–] alexandros link

I think keeping all the data in one place has been one of the smartest things we did. If we'd been crazy enough to give each microservice its own persistence, we'd be neck-deep in chaos by now. It happened for only one service, for reasons we should have ignored at the time, and it keeps biting us in the rear to this day. Thankfully we're getting closer to reversing that mistake, oh happy day.

reply

[–] alexandros link

We started resin.io with a microservices architecture from day one, and we are still happy with the result. It was very painful to get it up and running, but once that was over, we were good to go. The boundaries we defined early on are still solid, and the result works well. One critical detail however, is that all our persistent state lives in one place, minus specific well-understood exceptions. Arguably, starting with microservices helped us define strong boundaries we weren't tempted to blur over time.

All this said, I do sometimes wish we had started with a monolith, if only because we paid the microservices tax in deployment and infrastructure maintenance way too early, long before we had the scale to warrant it. I feel starting with a monolith would have probably meant more progress in less time, though with a risk of not being able to refactor smoothly when the time came.

Overall a hard call to make, since I'm happy with the result, but wonder about the pain it took to get here, and at the same time counterfactual universes are hard to instantiate...

reply

[–] nichochar link

I disagree with this, but only because it makes the assumption that you're working with a typical single-process language, like Java, Python, or C++.

I think if you design fault-tolerant microservice-based services with something like the Erlang BEAM VM, things will work out well, since you're being very careful about message passing from the beginning.

reply

[–] true_religion link

From reading the wiki article, it says the goal was to replace the payroll system for 89,000 people by 1999. By 1997, it went live in a staged rollout for 10,000 employees. Within the same year, the performance of the software improved 8300%.

Sadly, Chrysler was bought[1] out in 1998 and the project was canceled in 2000 for unknown reasons. In 1999, when it was intended to be fully deployed, Chrysler was in the middle of a reorganization, which would lead to layoffs of more than 21,000 people over the next 3 years.

I'd guess organizational politics had more to do with the cancellation than anything the development team did wrong.

[1] Oh, sure, it was labeled "a merger of equals", but consider that the share price fell 50% within a year afterward and Chrysler was eventually split off and sold... it just makes me think of the Time Warner + AOL merger.

reply

[–] bsder link

Starting here also gives an interesting view of whether or not it was a "success": https://books.google.com/books?id=Nxi7O7FCdIEC&pg=PA43&lpg=P...

To be fair, he has an axe to grind, but C3 was quite far from a success.

To be fair, big projects fail. A lot. For many reasons. So, I'm not going to blame the people involved.

And I find Fowler to actually be probably the most level-headed of the bunch to come out of that project.

However, I WILL apply requisite amounts of skepticism when those same people start peddling their "expert knowledge" about subject matter in which they provably didn't "beat the average".

reply

[–] acdha link

That reads like a lot of accusation without much evidence, and the author's credibility was immediately called into question for me when they dismissed Y2k as a non-issue because it didn't produce widespread problems, ignoring all of the problems which had been fixed in the previous decade. (This is like the people saying closures after a snow storm weren't necessary because there was no traffic, ignoring the reason why)

The only real lesson I feel comfortable drawing from that is that we need more diverse case-studies than C3 because very few other projects are going to have the same environment with a huge company, multi-divisional politics and then a merger happening shortly into the project, one of the best defined problems, etc.

reply

[–] Steeeve link

People spend entirely too much time debating how to avoid a giant ball of unmaintainable junk while driving their projects to inevitably become giant balls of unmaintainable junk.

You can do monoliths bad. You can do microservices bad. You can spend too much time architecting and not enough time understanding where the practical design considerations lie.

My philosophy is to build with the intention of delegating. If you can communicate how your software works well enough that someone can take over maintenance, then you've done your job. Better if you can communicate it well enough that you can delegate it to separate groups with differing responsibilities.

Any system can get complex to the point where it's dangerous to make changes.

reply

[–] bsder link

Fowler failed miserably when building a monolith. See: https://en.wikipedia.org/wiki/Chrysler_Comprehensive_Compens...

Why should we believe his statements about microservices?

Personally, my experience in microservices vs monolith has been as follows:

If your system needs fast update with quick rollout of new features, monolith is probably superior. Being able to touch everything quickly and redeploy is generally quicker in a monolith.

If your system needs to be able to survive component latency/failure, microservices are probably superior. You will have hard separation that enables testability from the beginning.

Overall, I find the monolith vs microservices debate insipid. We have LOTS of counterexamples. Practically everybody writing Erlang laughs at people building a monolith.

reply

[–] kishorepr link

Like others have pointed out here, it's incredibly hard to know the application boundaries up front, which are required for building microservices.

I think solutions that are a hybrid of monolith and microservices work out well. As another person pointed out, this can be fairly easily achieved by having a monolith with multiple sub-projects to get separation of concerns. The code is all in one place, so it's easier to design and refactor. You can also deploy different sub-projects as microservices if you need to later on. So it's basically a monolith with separately deployable sub-components.

Once boundaries are clearly understood, it's then easier to physically separate services.

reply

[–] decisiveness link

What many seem to have missed from this is the bit at the end where Fowler concedes:

> I don't feel I have enough anecdotes yet to get a firm handle on how to decide whether to use a monolith-first strategy.

after linking and mentioning points of a guest post [1] (with which I strongly agree) which argues against starting with a monolith. A key part from that post:

> Microservices’ main benefit, in my view, is enabling parallel development by establishing a hard-to-cross boundary between different parts of your system. By doing this, you make it hard – or at least harder – to do the wrong thing: Namely, connecting parts that shouldn’t be connected, and coupling those that need to be connected too tightly. In theory, you don’t need microservices for this if you simply have the discipline to follow clear rules and establish clear boundaries within your monolithic application; in practice, I’ve found this to be the case only very rarely.

[1] https://martinfowler.com/articles/dont-start-monolith.html

reply

[–] yellowapple link

Even if you're building a monolith, though, you're generally well-served by a monolith that pretends to be a bunch of microservices - i.e. it could be split into microservices easily if the need arises, kind of like how some "hybrid" OS kernels could (in theory) be split into proper microkernels if the internal function calls were replaced with messages (the NT kernel is built this way, IIRC). Each part of this "chunky" monolith should provide a proper internal API, and no other part should have to call into that part's internal functions.

This should be easy to achieve in most "object-oriented" languages (like Ruby; a Rails monolith should have no problem being structured this way, even if quite a few of the ones I've seen in the wild seem to forego this). Erlang (and Elixir by descent) is also well-suited to this, since you can break your application into a collection of processes that - whether individually or in combination with other processes - can act like their own little microservices.
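A toy sketch of that Erlang-ish shape in Python (service name and prices invented): each internal "service" owns a mailbox and is reached only by message passing, so promoting one to a real network service later mostly means swapping the queue for a socket.

```python
import queue
import threading

def pricing_service(inbox: queue.Queue, outbox: queue.Queue) -> None:
    """An internal 'microservice': reachable only via its mailbox."""
    while True:
        msg = inbox.get()
        if msg is None:  # shutdown sentinel
            break
        sku, qty = msg
        unit_cents = {"apple": 50, "pear": 75}.get(sku, 0)
        outbox.put((sku, unit_cents * qty))

requests: queue.Queue = queue.Queue()
responses: queue.Queue = queue.Queue()
worker = threading.Thread(target=pricing_service, args=(requests, responses))
worker.start()

# The rest of the monolith talks to pricing only through messages.
requests.put(("apple", 3))
print(responses.get())  # prints ('apple', 150)
requests.put(None)
worker.join()
```

Because no other part of the app can call `pricing_service`'s internals directly, the message schema is the whole contract, exactly what you'd need if it later moved behind a network boundary.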

reply

[–] eropple link

I don't mean to pick on you, but your post is a litany of "doctor, it hurts when I do this!". There's nothing that should be preventing your monolithic application from handling significant load. You add more instances of it. If you're hitting problems where horizontally scaling your monolithic application falls down, you have much more severe problems that introducing network boundaries won't solve.

Monolithic applications don't encourage naive approaches; your conscious and unconscious choices around desired development rigorousness do. If those choices are causing you problems, there are much easier ways to prevent yourself from doing it than introducing network boundaries.

reply

[–] olingern link

> There's nothing that should be preventing your monolithic application from handling significant load.

I don't disagree at all. As with most things in life, throwing money at the problem will solve this. In a lot of positions I've been in, this cost becomes "the cost of doing business" that everyone assumes rather than, "we can do better, but for now this is our reality."

> Monolithic applications don't encourage naive approaches;

Perhaps, naive is a poor choice of words. Maybe, an example is better:

You develop a large system that's going to support an entire state's children's meal plans. You can easily do this in a monolithic architecture, but is it worth investing time now to peel apart the obvious points of high use?

Children/parents only have to sign up once, but they need access to their meal plans every day. I see two services in this simple example, and if I'm strapped for time, I defer a user management service to some sort of SaaS/PaaS so I don't have to deal with that overhead.

I see a great deal of value to everyone in investing in the optimizations that have high return, but minding Knuth's rhetoric that "premature optimization is the root of all evil."

reply

[–] eropple link

Dude...you're misunderstanding me. "Throwing money" at a monolithic application doesn't make it faster. Writing code that isn't kinda shit does. And it is just as easy to write that code when multiple modules live in a single process, intermediated by method calls, as when each module lives in a separate process, intermediated by HTTP. All you're doing is adding a network boundary. If you're gonna write bad code, you're gonna write bad code in both, and in the latter case you have the added problem of wrestling with bad code while having to understand how to debug/flow-monitor/etc. everything.

Your example is an emoji shrug, because I use Passport, a Postgres table, and a user token in a cookie and I'm done. It's not a big enough problem to break out across a network boundary because it's just code that may or may not be hit on any given server and I totally don't care if it is or not because a given node not running a subroutine costs nothing. It's trivial; don't complicate it.

reply

[–] dasil003 link

> If I design in a "microservice first," or just a service oriented design -- I find that there is much more clarity in system design.

Why is this though? There is nothing stopping you from thinking about architecture in a monolith, or deploying a monolithic code base to different server classes to optimize workloads.

Where microservices really come into their own is when you want to scale and decouple your engineering teams. At that point, the effort of defining and maintaining "public" interfaces between services pays dividends by providing a de facto specification that serves as a talking point between teams who literally do not have to know the inside of each other's black box. If everyone has to know the internals of multiple microservices, why are you paying for that overhead instead of using an internal method call, with all the benefits and assurances a language can give in a single process, rather than the pale imitation you get through RPC?

I'll concede that it depends a lot on the problem domain. Perhaps the service boundaries are obvious, the interfaces stable, and so you can easily reap the benefits without a lot of refactoring. Okay, that's a possibility. But in most cases I have to agree with Martin Fowler that when you embark on a new project you just don't know enough about the requirements to make that call. Unless you've already built the thing you're about to build, I think you very rarely will have the prescience to design the service boundaries correctly on the first go.

reply

[–] phaed link

> I find that the microservice first approach makes me consider future optimizations, such as caching policies, whereas, in a monolith, I would proceed in a naive, "I'll figure it out later" approach.

This goes to his point, the time spent considering these points is time gained in delivering an MVP to market.

> "Hey, we just built this great MVP for you. It probably won't handle significant load, so we're going to go off in a corner and make it do that now. Oh yeah, we won't have time to develop new features because we'll be too busy migrating tests and writing the ones we didn't write in the beginning."

You don't need that level of scaling potential in an MVP, and you get to have something out -- in time. While if you had started with a microservices approach you would still be off in that corner trying to get something usable out. Now you have a product in the hands of users, and your refactoring can consider that feedback.

reply

[–] olingern link

> This goes to his point, the time spent considering these points is time gained in delivering an MVP to market.

In undiluted markets where you're entering 'new territory' -- sure. Get a product in user's hands and let it evolve how it should. There are different scenarios where companies already have a user base and you have to be able to respond to that load within your MVP.

Referring to what I said in my original comment, there are always trade-offs, and monoliths have their place. If you have the luxury of time and money, I would say that a monolith is less than ideal.

> You don't need that level of scaling potential in an MVP, and you get to have something out -- in time. While if you had started with a microservices approach you would still be off in that corner trying to get something usable out. Now you have a product in the hands of users, and your refactoring can consider that feedback.

Don't disagree with this at all. Context of your market probably dictates whether you'll be in a corner or still figuring out service boundaries.

reply

[–] sbov link

> significant load

Please define significant load.

reply

[–] tunesmith link

Sometimes it's not about load but about speed of innovation. A huge, complex monolithic codebase might not take a lot of load, but it can still limit a team's ability to experiment with new features because of the big ball of mud. Decomposing areas into services might enable faster innovation better than refactoring the whole monolith.

reply

[–] acdha link

That seems orthogonal to me: is adding a network boundary really the only way to enforce basic software engineering practices? It seems just as likely that the same organizational issues would lead to e.g. learning that your data model is wrong and part of the app now needs to dispatch thousands of queries, and fixing this is harder than refactoring a couple parts of the same codebase.

(Note: I'm not saying microservices are bad – I just think that the process which lead to that ball of mud will unfold similarly with a different methodology)

reply

[–] olingern link

30+ requests/sec

Some would consider it light, but I work alongside a company that is currently struggling to get beyond that mark.

reply

[–] sbov link

So it would probably surprise you to hear that companies I've worked for in the past have built monolithic applications, serving per-user dynamic pages, that could handle upwards of 20,000 requests per second?

This is why I hate this subject. People use terms and don't define them. If you think microservices is the only way to scale past 30 requests per second you're extremely wrong.

reply

[–] eropple link

I need to give you an internet fistbump for this. I mean...you know what you can even do instead of The Holy Microservice? You can take parts of your API, facade them behind a different load balancer, and call into different instances of your monolith that only handle user management or billing or whatever. Un-run code's cost is, in the general case, basically zero--act like it. You don't need to do something like this, of course, unless your system has very spiky/expensive calls that have to be intelligently routed to systems that have capacity, but heck, you've just insulated yourself from load spikes and can granularly scale.

If you need to. You probably don't, and you definitely don't if you're struggling with 30 reqs/second. This isn't just YAGNI. This is YAHYBDI. You Are Hurting Yourself By Doing It. You need to write better code and examine the assumptions that have created the mess you're dealing with.
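A sketch of that facade in nginx terms (upstream names, addresses, and paths are all hypothetical): every instance runs the same monolith, and the proxy just decides which pool absorbs the expensive calls, so they scale independently.

```nginx
# One codebase, two pools. The "billing" pool runs the identical
# monolith but only ever receives the spiky/expensive endpoints.
upstream general { server 10.0.0.10:8080; server 10.0.0.11:8080; }
upstream billing { server 10.0.1.10:8080; }

server {
    listen 80;

    location /api/billing/ { proxy_pass http://billing; }
    location /             { proxy_pass http://general; }
}
```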

reply

[–] true_religion link

That sounds like SOA. It's the forgotten intermediary step between a monolith and a microservice architecture.

reply

[–] olingern link

Not sure why the contempt with an opinion.

I get that microservices are trite, and most people think they need them long before necessary; however, they have uses beyond scale.

I'm sure that there are ways to mitigate every point I can make within monoliths.

My points are just opinions.

reply

[–] eropple link

It's not contempt, it's frustration because you're making assertions that are not backed up by reality.

This is what I do for a living, and I am regularly but-but-microserviced by people who are equally ignorant of competent application design and who think that breaking it into HTTP-intermediated chunks will solve the fact that they are choosing to write bad code. That segmentation doesn't--it does nothing. It isn't just YAGNI, it's YAHYBDI, and I'll get hot under the T-shirt occasionally 'cause people who read discussions like this will get the wrong idea and stick their hands into the saw, too.

I could get paid more by letting people mangle themselves, but it'd be mean.

reply

[–] bpicolo link

Their main use is scale, but not tech scale. Scale of engineering teams.

There's the occasional case where a single language won't fit the bill, but there's a big difference between 2 services and 200.

reply

[–] olingern link

> Almost all the cases where I've heard of a system that was built as a microservice system from scratch, it has ended up in serious trouble.

I wholeheartedly disagree with this point.

I've found that if I build monolith first, it becomes harder to draw the line of how to separate endpoints, services, and code within the system(s).

If I design in a "microservice first," or just a service oriented design -- I find that there is much more clarity in system design. In terms of exposing parts of a system, I find that the microservice first approach makes me consider future optimizations, such as caching policies, whereas, in a monolith, I would proceed in a naive, "I'll figure it out later" approach.

Each school of thought has its downsides. Monoliths move fast and abstracting parts of the system later that arise as bottlenecks is a tried and true pattern; however, there aren't too many product / business folks who want to hear:

"Hey, we just built this great MVP for you. It probably won't handle significant load, so we're going to go off in a corner and make it do that now. Oh yeah, we won't have time to develop new features because we'll be too busy migrating tests and writing the ones we didn't write in the beginning."

The flip side is, microservice first has a lot of overhead, and (as things evolve in one system) refactoring can be extremely painful. This is an okay trade off where I'm at... for others, maybe not so much.

reply

[–] lisa_henderson link

Last year I worked at an electronic publishing firm which had wasted $3 million and 5 years on a Ruby on Rails application that was universally hated by the staff, and which we replaced with 6 clean, separate services. The problem with the Rails app was that it was trying to be everything to everyone, which is a common problem for monoliths in corporate environments. But the needs of the marketing department were very different from the needs of the publishing department. A request for a new feature would come in from the blogging/content division and would be added to the Rails app, even though it slowed the app down for everyone else.

Six separate services allowed multiple benefits:

1.) each service was smaller and faster

2.) each service was focused on the real needs of its users

3.) each service was free to evolve without harming the people who did not use the service

There was some duplication of code, which suggests a process that is the exact opposite of "Monolith First":

Start with separate services for each group of users, then later look to combine redundant code into some shared libraries.

reply

[–] rukuu001 link

Here's Matt Ranney talking about how Uber's microservices-first approach allowed them to scale their workforce super fast; also how those microservices became a kind of decentralized ball of mud:

https://www.youtube.com/watch?v=nuiLcWE8sPA

reply

[–] Havoc link

I'd say the more correct interpretation is "don't introduce the complexity of modularity too early"

reply

[–] garganzol link

Everyone who eats the food of a thought leader like Martin Fowler eventually walks into a trap. Shiny ideas that "sound interesting" are like a candle flame to a moth.

I made a simple rule for myself a long time ago: <insert name of a "thought leader" here> last.

reply

[–] marichards link

Modular monoliths can be a simpler middle ground. Modules of functionality that work on their own (verified with in-memory integration tests) can easily be tested, separated into microservices, or assimilated into a monolith. Be wary of runtime functions shared between modules, as they strictly couple the two and risk side effects on each other, tending towards spaghetti. But for monolith quick wins, modules help with sharing management-dependent resources like database transactions.
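A minimal sketch of what such a module boundary might look like (module names invented for illustration): each module owns its data and the wiring happens in one place, so the in-process call at the boundary could later become a network call without touching module internals.

```python
# Hypothetical modular-monolith sketch: modules talk only through small
# public interfaces, never through each other's internal state.

class BillingModule:
    """Owns billing data; no other module touches it directly."""
    def __init__(self):
        self._invoices = {}

    def create_invoice(self, order_id, amount):
        self._invoices[order_id] = amount
        return {"order_id": order_id, "amount": amount}

class OrderModule:
    """Depends on billing only through its public interface."""
    def __init__(self, billing):
        self._billing = billing  # could later be an RPC client instead

    def place_order(self, order_id, amount):
        invoice = self._billing.create_invoice(order_id, amount)
        return {"status": "placed", "invoice": invoice}

# Wiring happens in one composition root, which doubles as the seam
# for an in-memory integration test.
billing = BillingModule()
orders = OrderModule(billing)
print(orders.place_order("o-1", 42))
```

Because `OrderModule` never reaches into `BillingModule`'s internals, splitting billing out into its own service later is a change to the wiring, not to the modules.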

reply

[–] sctb link

Discussion from a couple of years ago: https://news.ycombinator.com/item?id=9652893.

reply

[–] rukuu001 link

You (and I) are almost certainly going to get it wrong first time around. Which approach is most forgiving of errors? I'd say monolith.

reply

[–] losteric link

When they realize business managers don't know jack about computers, and delegate more authority to engineers and/or hire product/technical managers.

Development processes and software architecture follow from business processes and structure... it's hard to be agile and develop services with a clean separation of responsibilities when the business insists on monolithic hairball project requirements with fixed deadlines.

(aka Conway's Law: https://en.wikipedia.org/wiki/Conway%27s_law )

reply

[–] y2hhcmxlcw link

I wonder at what point the financial pressure to stop designing bad software becomes so high that it overrides the political pressures that created the bad designs and practices? To a community like HN, designing at least a decent web application is just normal everyday thinking, but at some companies that's seen as either visionary and impossible, or even immature. At some point there would seem to be so much money on the line that trimming the man-hours going into maintenance nightmares would force a fix. I sometimes wonder if big companies across the country will wake up to this and there will be big layoffs, because once they adopt modern architecture they won't need so many people. Does this seem feasible, or will Conway's law hold even as the financial pressure to do better really goes up? Or will the rewrites take even more people, so there won't be layoffs or pressure on the job market?

reply

[–] losteric link

Well, Conway's Law states that code reflects the organization's bureaucracy... bad software means bad leadership and decision making, likely spread throughout the company. Companies will root out those inefficiencies if and only if they are doing poorly. Deep cultural changes are hard to drive if the company is doing relatively well, no one wants to take the "risk" of trying to improve.

reply

[–] acdha link

Remember Conway's law and look for the factors which lead to unmaintainable code: poor communications, inconsistent or conflicting management positions, unrealistic and unreliable deadlines, poor internal environments and processes, hiring and review processes which do not reward the right things, trusting consultants rather than staff, etc.

Nobody says they want to waste money building bad software. The problem is that the factors which ensure it are political and hard to change. It's easy to forget that since we tend to focus on the visible technical aspects but they're almost always a reflection of the environment.

https://en.m.wikipedia.org/wiki/Conway%27s_law

reply

[–] defined link

I suspect that it will be in a very long time.

For one thing, I've seen each generation of new managers forget or ignore the knowledge hard-won by their predecessors, along with the software engineering body of knowledge. Oh, and ignore their current experts, too. I think it's a bug in the human wetware, because it happens so often.

For another, it was explained to me when consulting at $giantcorp that sometimes - for example, regulatory compliance or competitive edge - it's more important to get a shitty monolith out there, bugs and all, by a drop-dead date than to save in the long run by doing a good job.

And as long as there are people out there willing to work for low wages fixing or rewriting the pile of crap, and it's profitable for $company, the practice will continue.

Or until someone can prove, with cost measurements on multiple large-scale projects implementing the same requirements, that the ROI - in bottom line $ terms - on a well-engineered system is much greater than the crappy equivalent.

I can't see that happening because of all the variables involved (team, skill, chance, cost, variance in interpretation of requirements, and so on), plus, who would pay for that?

EDIT - typo.

reply

[–] kid0m4n link

We took ur jobs????

reply

[–] y2hhcmxlcw link

At what point will corporations that still design massive systems as an unmaintanable monolith figure out they can architect things better and save a ton of developer dollars? At what point do they start taking good points from articles like this and either break those up into microservices or some other solution?

reply

[–] oDot link

Just wanted to point out that I was swyping, lol.

reply

[–] oDot link

There is a middle ground, and it's building a monolith that's anticipated to be broken down.

reply

[–] stuartaxelowen link

I quite like the "web server and stream processors first" strategy, since it will take you much farther and retain the same code efficiencies as the monolith, but will also give more operational efficiency at minimal extra cost.

reply

[–] holografix link

Monolithic 12-Factor apps let you abstract some of the requirements to managed services, like a DB service, an email service, etc. Someone already mentioned it here, but stateless app processes are a must.
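A rough illustration of the stateless, config-from-environment idea (the variable names are assumptions, not a prescribed layout): backing services are attached resources named in the environment, so swapping a local database for a managed one is a config change, not a code change.

```python
# 12-factor-style sketch: all backing services come from the environment,
# and the process itself holds no request state.
import os

def load_config(env=os.environ):
    # Each backing service is an attached resource identified by a URL.
    return {
        "database_url": env.get("DATABASE_URL", "postgres://localhost/dev"),
        "smtp_url": env.get("SMTP_URL", "smtp://localhost:25"),
    }

def handle_request(config, user_id):
    # All state lives in the backing services, never in process memory,
    # so any instance behind the load balancer can serve any request.
    return {"user": user_id, "db": config["database_url"]}

# Pointing at a managed database is purely a deployment-time decision.
cfg = load_config({"DATABASE_URL": "postgres://managed-host/prod"})
print(handle_request(cfg, "u-1"))
```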

reply

[–] invisible link

One issue with this approach is that refactoring can cause bugs or changes in behavior, so you're risking bugs twice (once in the initial write, then again during the refactor+tests). If you could guarantee the code was testable from the start, the approach you outlined might work.

reply

[–] jaxn link

I think the same argument should be made for NOT writing tests for a prototype.

Build something useful, fast. Then refactor. Write tests when refactoring or fixing a bug, but not when prototyping.
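A hedged illustration of "write the test when you fix the bug" (the function and the bug are invented for the example): the regression test is pinned down alongside the fix, not during prototyping, so a later refactor can't silently reintroduce the bug.

```python
# Prototype-era code got a bug report: it crashed on inputs like "$1,200".
def parse_price(text):
    # The fix: strip currency symbols and thousands separators first.
    return float(text.replace("$", "").replace(",", ""))

def test_parse_price_handles_commas():
    # Regression test written with the fix, not during prototyping.
    assert parse_price("$1,200") == 1200.0
    assert parse_price("5") == 5.0

test_parse_price_handles_commas()
```

The test suite then grows exactly where the code has proven fragile, instead of where a prototype author guessed it might be.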

reply

[–] acdha link

He's smart and experienced. That doesn't mean he's always right but I would consider what he says and reason through disagreements. Most commonly, I find the right/wrong arguments are actually reflecting the fact that the underlying environments aren't as similar as they might seem at first glance. Someone giving advice based on working on an F500 team with 200 developers will have seemingly-bizarre priorities to a 5 person startup, just as advice from someone at Google used to handling multiple orders of magnitude more traffic is likely expensive overkill for that small team.

reply

[–] eropple link

I agree with this. I used to be a lot more down on his work, but it wasn't his fault so much as all the wannabes who bought it uncritically. (Much the same as stuff like Kubernetes--you aren't Google, they have Google problems, you don't have Google problems, stop automatically adopting Google solutions to problems that are a-web-server problems.)

reply

[–] guscost link

> you aren't Google, they have Google problems, you don't have Google problems

Don't underestimate how hard it can be for developers to accept this. The hacker community can become a tedious game of one-upmanship at times, and it's way too easy to slip into "impostor syndrome" mode. Often, the people barking loudest about the newest ideas have slipped themselves, and are just trying not to appear clueless.

But cluelessness is fine. It's the default state of being, we all need to be comfortable (if not satisfied) with it.

I don't want to suggest that Martin Fowler is clueless, of course. He has described many prudent battle-tested techniques that can be absolutely essential in context. If you haven't seen his article on Collection Pipelines, it's relevant to all kinds of modern programming: https://martinfowler.com/articles/collection-pipeline/
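For a flavor of the collection-pipeline style that article describes, here is a small Python sketch (the data is made up): each step takes a collection and produces a new one, so the transformation reads as a chain of filter/map/sort operations rather than an accumulating loop.

```python
# Collection pipeline: filter paid orders, map to totals, sort descending.
orders = [
    {"customer": "a", "total": 30, "paid": True},
    {"customer": "b", "total": 120, "paid": False},
    {"customer": "c", "total": 75, "paid": True},
]

paid_totals = sorted(
    (o["total"] for o in orders if o["paid"]),  # filter + map in one pass
    reverse=True,                               # then sort
)
print(paid_totals)  # [75, 30]
```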

reply

[–] acdha link

I really strongly agree with your point about uncritical thinking. Our industry seems to be especially prone to dogma and so many arguments seem to be people not realizing that they're talking past each other.

reply

[–] lmm link

Doesn't know what he doesn't know. There's some good advice among what he says, but he also generalises far beyond the limits of his experience.

reply

[–] sotojuan link

Probably a better source of information than the trendy blog posts written by 20 somethings we get here.

reply

[–] maxxxxx link

I think he often states the obvious but gives it a nice new name.

reply

[–] pgwhalen link

That's a good way of putting it, and I think he would actually agree with you on that. I was at his "Event driven architecture" talk at Goto Chicago where he talked about four patterns that can show up in something you might call event driven. He didn't presume to be inventing anything, just categorizing some patterns that have been recognized before, with the intent of clearing up some of the haziness/hype that occurs when someone says they have an event driven architecture.

reply

[–] stuartaxelowen link

Sometimes right, sometimes wrong. I thought his position on TDD was a little impractical, but I wouldn't hold it against him.

reply

[–] a_imho link

Might be OT, but what is the opinion on Martin Fowler in general?

reply

[–] tomerbd link

There is a really interesting discussion here, but I need to quit my day job to read it all :O

reply

[–] undefined link
[deleted]

reply