Hacker News

If I'm reading the pricing page correctly, DocumentDB would run a _minimum_ of $200/month. That's for the smallest instance and no storage or I/O. Kind of steep if you ask me.


We were paying $5k a month for Atlas. So while it's not 'cheap' for a hosted solution, it's cheaper. The autoscaling and read replicas are better, and DR is very configurable. And then there is this line:

'Together with optimizations like advanced query processing, connection pooling, and optimized recovery and rebuild, Amazon DocumentDB achieves twice the throughput of currently available MongoDB managed services.'


Can you please elaborate? This launched today; did you have access to the new feature in advance?


MongoDB Atlas is the name of the cloud service run by MongoDB themselves, with which Amazon DocumentDB competes. https://www.mongodb.com/cloud/atlas


Not OP, but that wouldn’t necessarily be a surprise. As customers make product requests to AWS they can be tapped to test upcoming launches - anything from pre-release testing to very early alphas.


I can confirm that this is very much a thing that they do. We have an account manager able to bump feature requests over to the appropriate product managers, and have been involved in pre-release testing of features that we expressed interest in.


As others have said, we had MongoDB Atlas. It is basically MongoDB run in AWS with a pretty interface for basic things like whitelisting IPs and other such functions.


Yeah, that's pricey. They're definitely not going after early-stage startups then.

But if you have a medium-sized data set (e.g. 50+ GB), this is competitively priced. More RAM, storage, and compute than Mongo Atlas and Compose for less money.

Here's hoping they introduce cheaper options!


50GB is a really small data set.


Eh, by what measure? Realistically it's probably bigger than 90% of all Mongo datasets.

It's tiny if you're a massive company and it's massive if you're a tiny startup.


50GB easily fits in RAM. It's a small dataset.


If you can run the dataset comfortably on a Macbook then it's very, very small.

Heck, you can even just use grep over 50GB reading straight from disk. It's tiny.
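The grep point generalizes: any streaming scan uses constant memory regardless of file size. A minimal Python sketch of the same idea (the pattern and path are illustrative, not from the thread):

```python
import re

def stream_grep(pattern, path):
    """Yield (line_number, line) for matching lines, reading the file
    one line at a time so memory use stays flat even over 50 GB."""
    rx = re.compile(pattern)
    with open(path, encoding="utf-8", errors="replace") as f:
        for lineno, line in enumerate(f, start=1):
            if rx.search(line):
                yield lineno, line.rstrip("\n")
```

At typical sequential-read throughput (hundreds of MB/s on spinning disk, multiple GB/s on NVMe), a full pass over 50 GB finishes in minutes.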


Is an argument based on the premise that relative terms have absolute meanings a good use of people's time here?


In a recent work Slack chat, a dev asked what a particular table contained. They were going through our data inventory and found a randomly-named table 18TB in size. When I ran "select count(*)" against it, I got back 5,325,451,020,708 rows (that's a copy-and-paste).

50GB isn't trivial, but it's utterly manageable.
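A quick sanity check on those figures (assuming decimal terabytes; both numbers come from the comment above):

```python
table_bytes = 18 * 10**12        # 18 TB, decimal
rows = 5_325_451_020_708         # the count(*) result quoted above

bytes_per_row = table_bytes / rows
print(f"{bytes_per_row:.2f} bytes/row")  # ≈ 3.38
```

At roughly 3.4 bytes per row, that only works with heavy columnar compression, which is consistent with the Spark/Snowflake storage mentioned later in the thread.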


It seems a bit wrong to have an 18TB table but no idea what it contains...


It was a temp table that we hadn't garbage collected yet. We don't make a habit of leaving that much junk data around, but it bumped our monthly storage bill several percent, not like tripled it.


Was this a relational or NoSQL DB?


It's primarily in things like Spark and Snowflake that act like relational DBs as long as you squint the right way.


in my experience it qualifies as "medium"


If it can be stuck in a sqlite database and run on a developer laptop, then no, it is not medium by any standard.

Please elaborate why you think 50GB is anything other than a small dataset that can fit in memory on any half-decent server.


[edit] In the spirit of not being a condescending tool to you, I'll replace my original reply with this: https://en.wikipedia.org/wiki/Long_tail


I'm assuming this is a joke. You can run databases that size without any of the fancy scalability stuff: no sharding, no anything. I'd actually recommend that; it makes admin super easy!


Besides that, AWS will charge per transaction (at $0.20 per million), which is outrageous given that you already pay per instance.

The correct pricing strategy would be per request or per instance; AWS is charging for both.
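To make the two-axis billing concrete, here's a rough monthly cost sketch. Only the $0.20-per-million I/O rate comes from the thread; the hourly instance rate is an assumption chosen to match the ~$200/month minimum mentioned upthread, and the I/O volume is an illustrative workload:

```python
HOURS_PER_MONTH = 730              # AWS's standard billing month

instance_hourly = 0.277            # assumed: implies the ~$200/month minimum
io_price_per_million = 0.20        # the per-transaction rate quoted above
monthly_ios = 500_000_000          # assumed workload: 500M I/Os per month

instance_cost = instance_hourly * HOURS_PER_MONTH    # ≈ $202
io_cost = io_price_per_million * monthly_ios / 1e6   # $100
total = instance_cost + io_cost
print(f"instance ${instance_cost:.0f} + I/O ${io_cost:.0f} = ${total:.0f}/month")
```

The point of the complaint above is visible here: both terms grow with usage, since a busier workload eventually needs both a bigger instance and more billed I/Os.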


I would guess the pricing model is actually closely related to the main dimensions of their costs and is quite valid.

The key point is illustrated by this quote from their main landing page: "storage and compute are decoupled, allowing each to scale independently".

This suggests it is built on top of the Aurora storage layer, or something similar, as other comments have suggested. This means there is a real cost per I/O operation because you aren't limited by the physical hardware of the compute instances, you get "free" storage nodes underneath that do much more than traditional storage and thus have to be built into the pricing structure.

It is definitely not going to be the cheapest solution for every use case, but do the math before you reject it. If it does follow the Aurora pattern, the number of I/O operations you are billed for may be far lower than you expect because, to use another quote from their product page, "Amazon DocumentDB reduces database I/O by writing only database changes to the storage layer, avoiding slow, inefficient, and expensive data replication across network links". That quote sounds like marketing speak without the background, but it lines up very well with some of their in-depth Aurora whitepapers, such as https://www.allthingsdistributed.com/files/p1041-verbitski.p... Again, I haven't seen evidence this is based on Aurora, but the details they describe line up really well.


The correct pricing strategy of any product is "whatever the customer is willing to pay for it". If you feel the price is too steep for your use-case, then don't buy it.


That's rarely actually true for anyone who wants to operate for more than a short time period. There are significant costs to gouging your customers: anything from it being illegal, to it encouraging competition, to your customers being motivated to actively flee you and shit on your reputation. The correct pricing strategy for people who don't have a long-term enforceable monopoly is "whatever most customers are willing to reasonably happily pay".


The minute that you have customers paying any amount at all, you set yourself up for possible competition undercutting you on price. The truth is, whether you have a great or poor relationship with your customers, unless you have legal protections you have very little control over whether competitors will eventually enter your market or not. So you need to always operate as if there is competition breathing down your neck.

Pricing strategy has little to do with customer happiness in aggregate. Every price will make some customers happy, and other customers feel gouged, because different customers extract different amounts of value from your product. The key to protect yourself from competition isn't to spend time worrying about how pricing affects your aggregate customer volume, but about whether your customers are happy. Maybe some customers are unhappy because they feel gouged. Maybe you could make them happier by reducing prices. But maybe, you're better off letting them go, if they represent a small minority of your users, and instead focus on what a majority of your users might appreciate more - better service, relevant features, etc. which make them happier.


I think you've hit on what always bothers me about this sentiment. It is obvious that at any point in time you can charge the maximum customers are willing to pay, but that allows for disruption through channels like competition. The opposite, charging the minimum needed to continue providing the goods or services, seems optimal but leads to a company with zero profits that is unattractive to investment. Is there any literature on how to identify the optimal point of "whatever customers are willing to reasonably happily pay"? Businesses successfully exist at many points on the spectrum from zero profits to the most profit the market will bear, but I'd be interested in anything discussing optimality.

[Edit] Amazon employee working in Physical Consumer (not AWS). Asking out of personal curiosity.


I'm not an economist and can't point to anything in particular, but I would be skeptical of anything that claimed a general approach to that. "Optimal" depends entirely on what you're optimizing for, which is basically an infinite possibility space. I could need a significant amount of revenue immediately to fund a desired business development, or I could have plenty of cash and want to build a large, loyal long-term customer base at the cost of immediate profit. As you say, successful businesses exist doing pretty much everything; the only limiting factor is being a viable ongoing concern (and that can just mean having a rich backer). I'm sure there are things discussing optimization within small slices of the possibility space, though (but all the normal caveats about economists making assumptions that rarely apply to humans apply even to those).


> Amazon employee working in Physical Consumer (not AWS)

You too? I'm in AFT. I posted the original "whatever the customer is willing to pay" comment. Mostly just offhand and yeah there's a lot of nuance to it.

I don't mean that anyone should want to individually gouge each customer, but when running a business one should pick a price whereby the total long term profit is maximized.

Your pricing determines the number of customers. Your pricing also determines the profit on each customer. By choosing your pricing strategy correctly, you should have some people who won't buy your product.


>> "whatever most customers are willing to reasonably happily pay"

Do you have a better idea of what this is than they do?

Considering they already have launch customers actively using this product and there are several comments on this page saying pricing is better than MongoDB?


AWS very often charges across multiple axes. I tried to model out our CloudFront charges, and they charge there for 3-4 different factors, each of which varies in pricing by region.

I think the idea is that by charging precisely where they incur costs, they can be much more reactive to different usage patterns, and therefore be more competitively priced overall.

Although it certainly does create lock-in due to not being able to figure out your billing and accurately model alternatives.


Seems to be really aimed at businesses which desperately want to get off of MongoDB.


Not really at all.

It's targeted at enterprises like mine who currently use MongoDB on premise and are looking for a managed solution. The advantage of AWS over Atlas is you can use the same security and governance approaches e.g. IAM policies, ADFS/SAML integration, Cloudwatch/Cloudtrail etc.


Exactly. Atlas kills me without any type of SSO option for the control plane.

Also, I feel that they HAD to offer this to counter Azure Cosmos DB.


So they’ll have a huge target market.


I’m not a fan of mongo, but I’ve run a fairly sizable enterprise platform on it for nearly a decade and we haven’t had any major issues that would make replacing it an urgent desire.


Once there is AWS version, it seems like a matter of time before it becomes the safe choice. Nobody got fired for using AWS.



