PASS 2008 Keynote, Part 2: Kilimanjaro, Madison and Gemini

Ted Kummert opened by reiterating the message of Microsoft’s increased level of support for the PASS conference, and introduced members of the SQLCAT team who were in attendance. Ted’s data storage division needed to consider four main “pillars” of development:

  • Enterprise data platform — Ted mentioned security advances and a few other things I missed. Ted is not as captivating a speaker as Wayne.
  • Beyond relational — supporting new data types, such as spatial data and FILESTREAM columns, to deal with large volumes of unstructured data.
  • Dynamic development — across VS 2008, .NET 3.5 and SQL Server 2008, the Entity Data Model was introduced to allow you to deal with code in terms of real business objects (customers and so on).
  • Pervasive insight — getting business value out of data via advances in their BI and data warehouse capabilities.

Ayad Shommout, a lead technical DBA for CareGroup, came on stage to talk about their experiences moving to SQL Server 2008.

It was disastrous!!!

No, only joking. It was completely seamless…a 25% performance increase without changing any code, enhanced productivity and great features…the three singled out for special mention were policy-based management, transparent data encryption, and auditing.
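Of those three features, transparent data encryption is the easiest to illustrate in a few lines. A minimal sketch of enabling TDE in SQL Server 2008 follows; the database name, certificate name, and password are hypothetical:

```sql
-- Create a master key and a certificate in the master database
USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'StrongPassword!123';  -- hypothetical password
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';      -- hypothetical name

-- Create the database encryption key and switch encryption on
USE SalesDB;                                                      -- hypothetical database
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
ALTER DATABASE SalesDB SET ENCRYPTION ON;
```

Because TDE encrypts pages as they are written to and read from disk, no application code changes are required, which fits the "without changing any code" experience CareGroup described.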

Ted moved on to expand on the success story of SQL Server 2008:

  • 1 million downloads of SQL 2008 RTM
  • 2500 partners offering solutions
  • Leading performance benchmarks
  • Fewest vulnerabilities
  • Fastest adoption

The figures sound impressive, but where is the stampede of eager SQL 2008 adopters? My impression so far is that people are in no hurry to move over despite the fact that, as I previously reported, there are smaller barriers to migration this time.

SQL Server 2010 (Kilimanjaro)

So where is SQL Server going with Kilimanjaro? In five words: scale up and scale out. Ted spoke of the “innovation” coming in the first half of 2010 with Kilimanjaro, and this presaged 30 minutes of quick-fire demos that introduced a lot of new projects in a “high impact, low detail” kind of way, with new acronyms flying thick and fast.

Scale out: Project Madison

Jesse Fountain talked about scaling out, in the form of the data warehousing project “Madison”, based on technology acquired from DATAllegro.

In two weeks, his team generated 1 trillion rows (150 terabytes) of data and then distributed that data across multiple nodes. He demonstrated monitoring 25 nodes, each with 8 cores, and the speed of executing reporting queries against this vast amount of data in this “scale out” architecture.

SQL Server Fabric

The next theme was manageability and what was coming in Kilimanjaro to improve this. On came Dan Jones. To address multi-instance management, they were introducing SQL Server Fabric, with a Fabric control point to make it much easier to manage at large scale.

Dan opened Fabric Explorer from Management Studio, which provides storage utilization history on an instance-by-instance basis. On each instance, you can define policies that identify over- and under-used resources.

In Visual Studio, he demonstrated how to generate a “DAC Pac”, which defines not only the schema but also the deployment requirements. The DBA imports this into an application library, can refine the policies at that stage, and then deploys it to a Fabric.

SQL Data Services

Ted introduced SSDS as the data platform element of Azure Services, extending the data platform to the cloud. The first public CTP of SQL Data Services is now available, and he encouraged everyone to check it out. And that was that on SSDS! I was wondering what was up with SSDS before this, and I’m definitely none the wiser now.

Managed self-service BI reporting: Project Gemini

Sounds impressive, eh? Building on Report Builder, Kilimanjaro introduces Gemini, “an in-memory, componentized model for building reports”…a merging of the capabilities of Analysis Services and Excel. Donald Farmer came on stage sporting angelic wings to demonstrate.

Things were moving so fast at this stage that it was hard to keep up. Donald showed how to “mix and match” data from the database with locally stored data from Excel, sorting and filtering 10 million rows of data in Excel in seconds, performing pivot-table analysis with various ways to visually filter the data, and then publishing to SharePoint for public consumption.

It definitely looks very impressive, and is something we’re going to hear a lot more about. And doubtless it will keep consultants happy and occupied for many years to come.