
I work at a microservice company. An example microservice is our geoservice, which simply takes a lat/lon and tells you what service region the user is in (e.g. New York, San Francisco). You can see how dozens of these services might be needed when handling a single request coming in from the front end or mobile apps. The service may eventually gain another related function or two as we work on tearing down our monolith (internally referred to as the tumor, because we'll do anything to keep it from growing...).


This just sounds crazy to me. How often do the services that are responsible for a given region change?


Why would you make that a separate service when e.g. one query on a PostGIS table can do that?
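The PostGIS alternative the commenter has in mind would look something like this. The table and column names (`service_regions`, `geom`, `name`) are assumptions for illustration; the query string would be passed to any Postgres driver, e.g. psycopg.

```python
def region_query(lat: float, lon: float) -> tuple[str, tuple]:
    """Build a parameterized point-in-polygon lookup for PostGIS."""
    sql = (
        "SELECT name FROM service_regions "
        "WHERE ST_Contains(geom, ST_SetSRID(ST_MakePoint(%s, %s), 4326)) "
        "LIMIT 1"
    )
    # Note: PostGIS points take (lon, lat) order, not (lat, lon).
    return sql, (lon, lat)
```

One round trip to a database you already run, instead of a separate deployable service.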


You might want to add functionality such as caching or business rules.

A better question would be: why not write a module or class? There are pros and cons to either, but advantages of a separate service include: better monitoring and alerting; easier deployments and rollbacks; callers can time out and use fallback logic; you can add load balancing and scale each service separately; it's easy to figure out which service uses what resources; it's easy to write some parts of the application in other programming languages; and different teams can work on different parts of the application independently, as long as they agree on an API.
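Two of those advantages — caching and fallback logic — can be layered on without callers changing, whether the lookup lives behind a service or a module boundary. A rough sketch, where `lookup_region` is a hypothetical stand-in for the real backend (here stubbed to fail, to show the fallback path):

```python
import functools

def lookup_region(lat: float, lon: float) -> str:
    # Stand-in for the real lookup (service call or DB query).
    # Stubbed to fail so the fallback below is exercised.
    raise TimeoutError("region backend unavailable")

@functools.lru_cache(maxsize=4096)
def cached_region(lat: float, lon: float) -> str:
    # Successful lookups are memoized; failures are not cached.
    return lookup_region(lat, lon)

def region_with_fallback(lat: float, lon: float, default: str = "unknown") -> str:
    try:
        return cached_region(lat, lon)
    except Exception:
        return default  # degrade gracefully instead of failing the request
```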


One query on a table is exactly the same thing as an HTTP GET call on a service.


Strictly speaking, that's not necessarily true. You can easily roll that table hit into another DB hit you were already making. You can't do that with services.


Yes, but instead of making it a whole new service, you are probably already using a database and can use that service for this functionality as well.

But since asking the question I've realized that if your application already needs a huge number of servers because it simply gets that much traffic, then putting something like this in its own Docker container is probably the simplest approach (it might even run Postgres inside it), especially if those boundaries change now and then.

But most companies aren't near that scale.


You're both missing the point here. Both things are conceptually equivalent:

- Select(db, somekey, someparameters) [returns some DB object]

- http_get_query("http://service.com/somekey/someparameters") [returns some JSON]

Both behave like an external (micro)service:

- they both need the target system to be available.

- they both may fail in weird and unexpected ways.

- failures from either need to be handled gracefully.

Their usage has different properties:

- A database call needs a persistent connection pool to the database, usually requiring a DB user and password.

- An HTTP call is just call-and-forget, with no persistent state. It's a lot easier to make, from any application, at any time.
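The equivalence above can be made concrete: both call shapes sit behind the same interface and demand the same failure handling. The two backends here are stubs (a real one would be a DB driver call and an HTTP GET respectively); the point is that the caller's code is identical either way.

```python
def db_select(key: str) -> dict:
    # Stand-in for e.g. cursor.execute("SELECT ... WHERE key = %s", (key,))
    raise ConnectionError("db down")

def http_get(key: str) -> dict:
    # Stand-in for e.g. an HTTP GET to https://service.example/<key>
    raise ConnectionError("service down")

def fetch(key: str, backend) -> dict:
    """Same caller code for either backend: same availability concerns,
    same need to degrade gracefully when the target system is down."""
    try:
        return backend(key)
    except ConnectionError:
        return {}  # fallback value instead of propagating the failure
```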


I understand what microservices are, but I can't understand what 1700 pieces of unique functionality of Uber could be abstracted into their own services. I am struggling to even think of 100, so I was curious what exactly some of these things were, and how they structured things to need so many service dependencies.


It looks like, for functions as simple as that, the RPC overhead needs to be pretty small, or it will eclipse the time and resources spent on the actual business logic.

E.g. I can't see a REST service working for this; something like a direct socket connection (behind an HA / load-balancing layer) with a zero-copy serialization format like Cap'n Proto might work.
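The overhead concern is easy to demonstrate even without a network: for a trivial lookup, serialization alone dominates. This sketch compares a direct in-process lookup against a JSON encode/decode round trip standing in for the cheapest part of an RPC (no network latency included, so real overhead would be strictly larger).

```python
import json
import timeit

regions = {"40.7,-74.0": "New York"}  # toy lookup table

def direct(key: str) -> str:
    return regions[key]

def with_serialization(key: str) -> str:
    payload = json.dumps({"key": key})                   # client encodes request
    req = json.loads(payload)                            # server decodes request
    resp = json.dumps({"region": regions[req["key"]]})   # server encodes response
    return json.loads(resp)["region"]                    # client decodes response

t_direct = timeit.timeit(lambda: direct("40.7,-74.0"), number=10_000)
t_serial = timeit.timeit(lambda: with_serialization("40.7,-74.0"), number=10_000)
```

On any machine `t_serial` comes out well above `t_direct`, which is the motivation for zero-copy formats like Cap'n Proto in this kind of setup.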



