UPDATE 2022-01-12: So it seems AWS backtracked on this slightly: rather than a full deprecation, they are simply preventing the creation of new DS2 clusters, or changes to existing ones, after 2021-12-31. Users already on DS2 can continue to use their clusters, but are advised to switch to dc2 or ra3.
A couple of years back, following a number of updates to Redshift that were available on all node types except for the ds2 Dense Storage node type, I speculated AWS might be planning on retiring the ds2 node type. It took a lot longer than I expected, but last month AWS quietly let users know they are indeed retiring the ds2 Redshift node type, and by the end of 2021 no less.
This was communicated directly to any impacted users at the start of June this year, but doesn’t seem to have been widely announced on the AWS blog or anywhere else. The text of the message received is below:
Hello,
We want to inform you we are disabling the ability to create new DS2 clusters as of August 1, 2021. And, will be deprecating the DS2 node type as of December 31 2021.
It’s an aggressive timetable, and it shows AWS are really keen to retire the old guard so they can focus on the new generation of RA3-based Redshift instances, which offer greater separation of storage and compute and allow them to better compete with the likes of Snowflake.
It’s been a long time coming. I noticed back in March 2019 that new Redshift features were only being released on compute-optimised node types and not DS2s. It was pure speculation at the time, but it’s been a continuing trend as AWS have kept adding features to Redshift that focus on separating compute and storage, and on increasing the flexibility of the product.
The release of RA3 node types at the end of 2019 was the beginning of the end for the venerable DS2, but given the substantial cost of the available node types, RA3 wasn’t a viable option for everyone, with the smallest option still around 4x the cost of the cheapest DS2. Enter the horribly named ra3.xlplus. AWS are no strangers to random and ridiculous naming. Just ask @QuinnyPig, whose favourite hobby is dunking on AWS service names. But they had a well-established naming convention for Redshift node types, and they just decided to tear it up.
In saying that though, the release of the ra3.xlplus node type introduced a viable migration path for many DS2 stalwarts, and AWS were quick to encourage users to make the switch.
Redshift’s tight coupling of storage and compute has been a problem for many years when compared to its key competitors, Snowflake and Google’s BigQuery, both of which scale compute and storage independently. This tight coupling has made it tricky to scale Redshift as quickly or cost-effectively, and has definitely cost AWS data warehouse customers. I’m in the process of multiple Redshift-to-Snowflake migrations myself.
But in taking an aggressive stance here and helping customers migrate to the new RA3s, AWS is really pushing to get onto a more or less level footing with their competitors. A foundation on which they can really build Redshift into a core part of their analytics service offering and a genuine contender for a modern data stack. And with AWS’ ability to integrate their other services like machine learning, serverless functions, and managed ETL via Glue, they’re going to be in good shape to return to the cloud data warehouse marketplace.