Table of Contents
- Full Analyst Insight Video
- About Enrico Signoretti
- About GigaOm
All organizations are looking to the cloud for their compute and data storage needs. Amazon AWS is, by far, the current market leader in terms of both revenue and number of customers, and its service ecosystem is the most complete. One of the most successful services in its portfolio is Simple Storage Service (S3). The first service launched by Amazon AWS in 2006, S3 is an object store and, as such, offers a combination of durability, availability, and cost that compares favorably with any other form of storage offered by this provider.
Glacier, Amazon’s archival object storage service, has an even lower price point but is intended for cold archiving only. S3 is also the name of the Application Programming Interface (API) for accessing data in this system, and it is widely considered the de facto industry standard for object stores.
Object storage is well suited to many use cases and applications, both in the cloud and on premises, and is becoming a very popular target for backup, archiving, content management, file-based applications, big data lakes, and so on. In other words, a large share of applications that deal with unstructured data can easily take advantage of an object store, and this is why it is becoming so popular.
As already mentioned, Amazon S3 is relatively inexpensive compared to other storage options available from Amazon AWS but, on the flip side, its performance is not always consistent. More importantly, the real cost of the service can become an issue for some customers. In fact, the S3 pricing model is quite complex and depends on several factors:
- Type of data protection
- Data locality
- Storage tier
- IO operations
- Data transferred out of AWS (egress)
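To see how these factors interact, the monthly bill can be sketched as a simple sum of per-factor charges. The function below is an illustrative model only: the parameter names and the default rates are placeholders chosen for the example, not AWS's published prices, which vary by region, storage tier, and request type.

```python
# Rough sketch of how the listed pricing factors combine into an S3 bill.
# All default rates are illustrative placeholders, NOT current AWS prices.

def estimate_s3_monthly_cost(
    storage_gb: float,
    put_requests: int,
    get_requests: int,
    egress_gb: float,
    storage_price_per_gb: float = 0.023,  # assumed $/GB-month (standard tier)
    put_price_per_1k: float = 0.005,      # assumed $ per 1,000 write requests
    get_price_per_1k: float = 0.0004,     # assumed $ per 1,000 read requests
    egress_price_per_gb: float = 0.09,    # assumed $/GB transferred out of AWS
) -> float:
    """Sum the main cost components: storage, IO operations, and egress."""
    storage_cost = storage_gb * storage_price_per_gb
    request_cost = (put_requests / 1000) * put_price_per_1k \
                 + (get_requests / 1000) * get_price_per_1k
    egress_cost = egress_gb * egress_price_per_gb
    return storage_cost + request_cost + egress_cost

# Example workload: 1 TB stored, 100k writes, 1M reads, 100 GB egress.
# Under the placeholder rates, egress alone accounts for more than a
# quarter of the bill, which is why it surprises many customers.
print(round(estimate_s3_monthly_cost(1000, 100_000, 1_000_000, 100), 2))
```

Even this simplified model shows why the bill is hard to predict: storage is a steady baseline, but request and egress charges scale with application behavior rather than with capacity.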