Oh my, this truly is my favourite improvement to the SQL Server family for some time. When this first landed in SQL 2012, I immediately started talking to new customers about adopting this into their HA solution, and started planting the seeds with existing customers about migrating to AlwaysOn from their current HA solution. For anyone who has had the pleasure of designing and implementing AlwaysOn, you will already know how awesome it is. There are some improvements (as we’d expect) and chief among them is the increased limit of readable secondaries, up from 4 to 8. Note though, that for designs where cost is a consideration, every read-only secondary still needs to be fully licensed.
Always a good topic if you fancy striking up a conversation with a DBA: where to create the statistics? I’ll dig into the pros, cons, rights, and wrongs in a future post on my blog, but for now I’ll simply state that SQL 2014 still creates these at the table level but can now maintain them at the partition level, via the new incremental statistics option.
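To make that concrete, here’s a minimal sketch of incremental statistics in action. The table, statistics object, and partition number are hypothetical; the `INCREMENTAL` and `ON PARTITIONS` syntax is what SQL 2014 introduces.

```sql
-- Create a statistics object that tracks changes per partition
CREATE STATISTICS stat_OrderDate
ON dbo.Orders (OrderDate)
WITH INCREMENTAL = ON;

-- Later, refresh only the partition that has churned,
-- rather than rescanning the whole table
UPDATE STATISTICS dbo.Orders (stat_OrderDate)
WITH RESAMPLE ON PARTITIONS (3);
```

On a VLDB this is the difference between re-sampling billions of rows and re-sampling just the active partition.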
There isn’t much to say here really, so I’ll simply give you a little food for thought. You can already throttle CPU and memory with Resource Governor, right? Do you want to throttle I/O too? You heard me. If you’re nodding, then you need SQL 2014.
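As a quick illustration, the new `MIN_IOPS_PER_VOLUME` and `MAX_IOPS_PER_VOLUME` options hang off the familiar resource pool syntax. The pool name and figures below are made up for the example:

```sql
-- Cap a reporting workload at 200 IOPS per volume,
-- while guaranteeing it at least 50
CREATE RESOURCE POOL ReportingPool
WITH (
    MIN_IOPS_PER_VOLUME = 50,
    MAX_IOPS_PER_VOLUME = 200
);

ALTER RESOURCE GOVERNOR RECONFIGURE;
```

You would then bind sessions to the pool with a classifier function and workload group, exactly as you do today for CPU and memory.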
Online DB Operations
SQL 2012 made some good progress with online database operations, but in SQL 2014 we now have online index operations at the partition level – which is great news for those with Very Large Database (VLDB) requirements.
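For example, a single partition of an index can now be rebuilt online, and SQL 2014 also lets you control how the operation behaves when it needs its locks. The index, table, and partition number here are placeholders:

```sql
-- Rebuild only partition 5, online, and back off politely
-- if the rebuild cannot take its lock within 10 minutes
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders
REBUILD PARTITION = 5
WITH (
    ONLINE = ON (
        WAIT_AT_LOW_PRIORITY (
            MAX_DURATION = 10 MINUTES,
            ABORT_AFTER_WAIT = SELF
        )
    )
);
```

Previously a partition-level rebuild was an offline operation, which on a VLDB often meant it simply never happened.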
Yes, it’s finally here! I’ve been sharing this information for some time now and Microsoft have finally given us the ability to back up our databases directly to Azure. Sure, it introduces latency overheads and there is no direct access to the backup files (which may be a cause for concern to some organisations). However, take a closer look at those RTO and RPO targets; if the environment permits this setup then it could be a good solution. I can count only a handful of customers I’ve worked with in the past that could potentially benefit from Azure backups, but that’s more than enough for me to start talking about it. One thing I should point out though is the Smart Backup feature, which takes backups to Azure whenever necessary. This effectively means that rather than you defining the recovery strategy, SQL Server does it for you.
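The mechanics are refreshingly simple: create a credential for the storage account, then back up to a URL. The storage account, container, and database names below are invented for the example, and the secret is deliberately left as a placeholder:

```sql
-- Credential holding the storage account name and access key
CREATE CREDENTIAL AzureBackupCredential
WITH IDENTITY = 'mystorageaccount',
     SECRET = '<storage account access key>';

-- Back up straight to blob storage
BACKUP DATABASE SalesDB
TO URL = 'https://mystorageaccount.blob.core.windows.net/backups/SalesDB.bak'
WITH CREDENTIAL = 'AzureBackupCredential',
     COMPRESSION,
     STATS = 10;
```

Compression is well worth enabling here, since every byte saved is a byte you don’t push across the wire to Azure.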
As with the Azure backups, we can also ship the database and transaction log files to Azure. In my humble opinion though, this feature should be approached with a degree of caution. Some SQL environments simply might not cope with the increase in latency, and performance might be considerably less than desirable. That said, this really does depend on the environment and requirements; Azure storage could be much cheaper than local SAN-based storage.
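For the curious, here’s roughly what that looks like: the data and log files live in a blob container, referenced directly in the `FILENAME` clause. The storage account, container, and database names are again hypothetical, and the SAS token is a placeholder:

```sql
-- Credential named after the container URL, holding a
-- Shared Access Signature for that container
CREATE CREDENTIAL [https://mystorageaccount.blob.core.windows.net/data]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
     SECRET = '<SAS token>';

-- Database whose files physically reside in Azure blob storage
CREATE DATABASE HybridDB
ON (NAME = HybridDB_data,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB.mdf')
LOG ON (NAME = HybridDB_log,
    FILENAME = 'https://mystorageaccount.blob.core.windows.net/data/HybridDB.ldf');
```

The instance itself still runs on-premises (or in a VM); only the files move to Azure, which is exactly why latency deserves such careful testing.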
Well, the biggest discussion point I can see unfolding (certainly during the early adoption period) will centre on the introduction of the Azure elements: backup and storage. Both might sound a little bonkers at first, but consider customers who have a long-term strategy to migrate to cloud-based solutions. This could be the organisation’s first real glimpse of the overall performance, reliability, and stability on which those decisions can be based. Essentially, there is a clear message here from Microsoft that SQL Server is now well and truly ready for a hybrid SQL solution, which to me says one thing: we are being offered more scalability options. Who wouldn’t want that?
If I get the chance over the next week or so, I’ll do my very best to post a “hands on” with SQL 2014 so you can see it in action.
I’m going to finish off with a little advice about where I see the primary cause of ill-designed solutions residing: latency. SQL 2014, for all its improvements and awesomeness, does allow for some severe latency issues where features are either misunderstood or misinterpreted. If you’re unsure, call in the experts… obviously, that’s risual 😉