In a previous lesson we created the Contoso1.xlsx workbook by setting up a basic spreadsheet and using the PowerPivot add-in to read in about 3.4 million fact rows plus some related dimensions.
With that workbook we could continue on and query it in single-user mode with Excel. The next thing we did was publish it to SharePoint so we could share it with others.
Now we’ll bring that same workbook into a more sophisticated, professional Analysis Services development environment.
The first question is, “Why would we want to do that?” Because the server environment is much more scalable: we can use even more memory, and we gain additional features such as partitioning and security roles. There are a number of new things we can do once we move this into a database server environment.
That’s what we’ll do next. The nice thing about this process is that it’s really easy. Imagine you have power users out there creating their own PowerPivot for Excel workbooks. They’re doing a lot of research and modeling and creating their own calculations, getting a lot of our traditional design work done for us.
We can take what they started and bring that right into a large scale database solution, which is something we haven’t been able to do before. Let’s look at how easy that is to do.
I’m running my SQL Server Data Tools in the Visual Studio 2010 environment. I’m going to create a new SSAS project. I have a project type called Import from PowerPivot, which I’ll select.
I have to give it a name, so I’ll use the same name I’ve been using: Contoso1. The default folders are fine.
What it will do is ask me to select the spreadsheet I started with, and it’ll read it. I select Contoso1 and click “Open”. From there the project wizard will do virtually everything for me. I just wait, and at the bottom of the screen I can see the status as the project is created.
The next thing the wizard does is actually read the data from the workbook into the model. If you watched the lesson on using PowerPivot for Excel, this will look very familiar. It’s really just a blue-colored version of the Excel add-in for PowerPivot.
We still have our tabular view, and we still can switch between the tabular and E-R diagram view. Other than some navigation changes, though, it’s basically the same design environment.
There are some new features that we’ll go over in the future–like roles and perspectives. There are more things we can do in this Visual Studio environment to enhance and extend this solution, but you can see the point I’m making is that the experience of modeling data and bringing it in is almost identical between the Excel and Visual Studio environments.
I’m going to do one thing before I actually publish this to a server: I’m going to add one measure. This will be our first measure. I’m going to use the AutoSum feature to add a measure called [Sales Amount]. We’ll get into this in more detail later, but for now I’ll give this measure an easier name: [Sales].
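What the AutoSum button generates behind the scenes can be sketched in DAX roughly as follows. The fact table and column names here (FactSales, SalesAmount) are assumptions based on the Contoso sample and may differ in your own model:

```dax
-- Hedged sketch of the AutoSum-generated measure.
-- FactSales and SalesAmount are assumed names; substitute your own.
Sum of SalesAmount := SUM(FactSales[SalesAmount])

-- After renaming it in the measure grid, the definition is effectively:
Sales := SUM(FactSales[SalesAmount])
```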
That’s all it really takes to create a named measure within this design environment. Before I deploy this I’m going to test and look at the model. Currently I’m working locally on my own developer workstation.
In previous versions of BIDS, to browse a cube we would have to process the cube and then use a built-in cube browser. That cube browser was nice, but it gave a different experience than users get in tools like Excel.
Now, when you browse your model in the development environment to see what it looks like for end users, you actually use an end-user tool like Excel. As you can see, this is just a regular browsing experience: I can choose dimensions and measures and check to make sure everything is OK.
I’m running my development environment with a local instance of SSAS for Tabular, and this Excel connection is actually to my local workstation model. Notice the definition doesn’t point to a database server, but points to my local environment.
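Under the covers, the Excel connection the browser creates is just an OLE DB connection string pointing at the local workspace database rather than a production server. A rough sketch (the instance name and catalog here are assumptions; the actual workspace catalog typically carries a generated suffix):

```text
Provider=MSOLAP;Data Source=localhost\TABULAR;Initial Catalog=Contoso1_Workspace
```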
In multidimensional development, you didn’t see data immediately as you modeled it. In Tabular mode you see the output of your data modeling immediately. The biggest thing to keep in mind is that if you’re using a local instance like I am, you really can’t be developing the model with your full data set.
Once we’re happy with this design–and we are for now–we need to tell the project where it should be deployed. I’m deploying this to LABSQL1 again. The database name is fine, and I don’t need to make any other changes.
Now I’ll right-click and deploy. By default, when we deploy a database it will automatically process. I left that setting intact, so when I deploy, it automatically processes. While that’s happening, let’s explain what’s going on.
Over on the server the database was created, and the server is now running queries against the source data. Processing is querying data from the remote data warehouse database at this point.
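The processing that deployment kicks off is equivalent to sending the server an XMLA Process command. A hedged sketch, assuming the deployed database ID matches the project name Contoso1:

```xml
<Process xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <!-- Assumed database ID; check the ID your deployment actually created -->
  <Object>
    <DatabaseID>Contoso1</DatabaseID>
  </Object>
  <Type>ProcessFull</Type>
</Process>
```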
Our deployment is complete, and the 3.4 million rows are loaded. If I want to test that, I can create a new Excel workbook and use the Data/From Other Sources wizard to connect to the SSAS data source on the remote server where I deployed the model.
I can do the same kind of querying that I would locally to make sure everything is working. So what we’ve done is create a database in my local development environment and then deploy the solution, which created a copy of the model on the remote server.
I could use the remote server itself as the design environment if I wished, but for performance reasons I chose to develop locally and then deploy, and I think most people will choose to do the same.
So that’s how you take an existing Excel workbook with PowerPivot and promote it into a server based solution for SSAS.