The solution has actually been split into two products: SQL Server PowerPivot for Excel and SQL Server PowerPivot for SharePoint.
Previously, end-users were forced to rely on IT when they wanted to do analysis or build applications with familiar tools like Excel, said Herain Oberoi, group product manager with the SQL Server business group at Microsoft. "In order for [end users] to do what they wanted, they had to depend on IT, whether it was to get the data or have them build an Analysis Services cube. We think [PowerPivot for Excel] is the thing that's going to allow IT and end users to form that bridge," he added.
Oberoi said that PowerPivot for Excel is designed to allow end users to work with large amounts of data, aggregate it from different sources and create what is essentially a Microsoft Excel file. Users can then share the information through SharePoint.
"Now you have a user-generated application that is actually fairly powerful and can be used by a large number of users, and it's published to SharePoint," Oberoi explained. "So, essentially, what you have now is this hosted solution that was never built by IT. That's the notion behind self-service BI."
Oberoi was quick to point out, however, that the concept of "self-service" business intelligence doesn't tell the whole story. "It's not just about self-service, it's about managed self-service," he said. "The IT side of the house will say, 'That's great that you are giving end users the ability to do all this stuff, but how do we manage all of this from an infrastructure level?' That's where PowerPivot for SharePoint comes in."
He said that the SharePoint integration is what gives IT the ability to look at the data to answer questions about server loads, hot-running applications and server capacity. "In this way, IT can manage these end-user generated apps. So end users have the flexibility they need, and it gives IT the management and visibility they want," he said.
What is Project Madison?
Now that Project Gemini has officially been dubbed Microsoft PowerPivot, database administrators and developers can turn their attention to another new SQL Server 2008 R2 feature: Project Madison.
Built on technology from Microsoft's acquisition of DATAllegro last year, Project Madison is a scale-out data warehousing solution for SQL Server environments.
Microsoft's interest in the product was rooted in its ability to allow customers to build data warehouses in a higher terabyte range. "We've always had scale data warehouses with SQL Server in the 10s of TB range, but we've never really competed in the 100s of TBs," Oberoi said. "There are not a whole lot of data warehouses in that range."
He added that the company hopes to bring a low total cost of ownership (TCO) value proposition to the market where "it's not just about being able to build really large-scale data warehouses, but doing that while still maintaining that low TCO model that we provide."
Microsoft designed Project Madison to improve performance while reducing the cost per terabyte for customers. A big part of this improved scalability and performance comes from the product's massively parallel processing (MPP) architecture, which allows queries to be distributed across multiple threads rather than a single thread. Microsoft's website describes the MPP architecture as an "enterprise hub" that helps avoid bottlenecks by evenly distributing data across multiple nodes.
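To make the MPP idea concrete, here is a minimal, purely illustrative Python sketch of the pattern described above: rows are distributed evenly across hypothetical "nodes," each node scans only its own partition in parallel, and a coordinator combines the partial results. This is a toy model of the general technique, not Project Madison's actual implementation; all names here (`distribute`, `node_scan`, `parallel_query`) are invented for illustration.

```python
from concurrent.futures import ThreadPoolExecutor

NUM_NODES = 4  # hypothetical node count for the sketch

def distribute(rows, num_nodes):
    """Hash-distribute rows evenly across nodes to avoid hot spots."""
    partitions = [[] for _ in range(num_nodes)]
    for row in rows:
        partitions[hash(row["id"]) % num_nodes].append(row)
    return partitions

def node_scan(partition, predicate):
    """Each 'node' filters and aggregates only its local partition."""
    return sum(r["amount"] for r in partition if predicate(r))

def parallel_query(rows, predicate, num_nodes=NUM_NODES):
    """Fan the scan out across nodes, then combine partial aggregates."""
    partitions = distribute(rows, num_nodes)
    with ThreadPoolExecutor(max_workers=num_nodes) as pool:
        partials = pool.map(node_scan, partitions, [predicate] * num_nodes)
    return sum(partials)  # the coordinator merges the nodes' results

# Example: sum all amounts greater than 5 across the distributed data.
rows = [{"id": i, "amount": i % 10} for i in range(1000)]
total = parallel_query(rows, lambda r: r["amount"] > 5)
```

Because each node touches only its own slice of the data, adding nodes spreads both storage and scan work, which is the scale-out property the article attributes to the MPP design.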
Project Madison will integrate with Microsoft's business intelligence tools, including SQL Server Analysis Services (SSAS), Reporting Services (SSRS) and Integration Services (SSIS).
Like PowerPivot for Excel and PowerPivot for SharePoint, Project Madison is expected to ship in the same time frame as SQL Server 2008 R2, which is set for general availability in the first half of 2010.
Check out more preview coverage of PASS Summit 2009.