Monthly Archives: January 2011

SharePoint Event Handling – When do events fire?

There are a lot of different events that are exposed for us to hook our code into. Unfortunately, WHEN those events get triggered is poorly documented and inconsistent across the SharePoint framework. I wrote this article to list out the different event hooks available and to document my test results when I built some automation into a document library using event handlers. My goal was to have some code run on every document added to the library to modify the field values. I was surprised by how much work was involved in getting it to behave consistently. Hopefully the material here will save you from some of the suffering I went through.

Event Receiver Classes

There are several Event Receiver classes that you can extend, each of which gets called at different times:

  • SPEmailEventReceiver – This class exposes a single event, EmailReceived, that is fired when a list receives a new email item.
  • SPItemEventReceiver – Events that happen to items in a list or library (i.e. new item created, item updated, attachment added, file moved, etc.). This is the really important one for document library automation, and is the focus of this article.
  • SPListEventReceiver – Events that happen to a list (i.e. new list created, list deleted, field added to the list, etc.).
  • SPWebEventReceiver – Events that happen to a website (i.e. new website created, website deleted, website moved, etc.), as well as the site collection deleted event.

You can also register an event handler directly through the SPEventReceiverDefinition class, but this is usually not necessary for most common tasks.
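
If you ever do need to register a receiver in code, here is a minimal sketch of what that might look like; the site URL, list title, assembly name, and class name are all placeholders for your own values:

using Microsoft.SharePoint;

public static class ReceiverRegistration
{
	public static void RegisterItemAddedReceiver()
	{
		//Open the site and grab the document library (placeholder URL and list title).
		using (SPSite site = new SPSite("http://myserver/sites/mysite"))
		using (SPWeb web = site.OpenWeb())
		{
			SPList docLib = web.Lists["Shared Documents"];

			//Create an SPEventReceiverDefinition that points at our receiver class.
			SPEventReceiverDefinition receiver = docLib.EventReceivers.Add();
			receiver.Type = SPEventReceiverType.ItemAdded;
			receiver.Assembly = "MyAssembly, Version=1.0.0.0, Culture=neutral, PublicKeyToken=1234567890abcdef";
			receiver.Class = "MyNamespace.MyItemEventReceiver";
			receiver.Update();
		}
	}
}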

SPItemEventReceiver Events

These are the events you need to hook into in order to perform any kind of automation related to documents being added to the library. There are about 20 events in the SPItemEventReceiver class that you could hook into (you can see the full list at http://msdn.microsoft.com/en-us/library/microsoft.sharepoint.spitemeventreceiver_members.aspx), but the main ones we are interested in are those that may get called when a document is being added to a document library.

  • ItemAdding
  • ItemAdded
  • ItemUpdating
  • ItemUpdated

Different methods of adding a document to the library

SharePoint supports several different paths for adding a document to a document library. Each of these paths triggers events in subtly different ways. Most of the bugs I ended up chasing in the system I built were related to events being called more than once, or not being called at all, depending on the method used to add the document to the library.

The different ways of adding a document to a library are:

  • Through the Upload/Add New Document feature in the web UI.
  • Through Office integration by using the SaveAs menu item to change a document’s location in SharePoint.
  • Using non-Office programs through the SaveAs menu item to change a document’s location in SharePoint.
  • Through a mapped drive or Explorer view, either by copying and pasting or by dragging a file directly into the Explorer window.

The Testing Setup

I built a stub event handler class that extended the SPItemEventReceiver and logged a message for each event handler that was called. I wired it to my document library and then tried adding documents to the library in various ways to see what got called.
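
The stub looked roughly like the following; this is a simplified sketch, and the Trace logging is just a placeholder for whatever logging mechanism you prefer:

using System.Diagnostics;
using Microsoft.SharePoint;

public class LoggingItemEventReceiver : SPItemEventReceiver
{
	public override void ItemAdding(SPItemEventProperties properties)
	{
		Log("ItemAdding", properties);
	}

	public override void ItemAdded(SPItemEventProperties properties)
	{
		Log("ItemAdded", properties);
	}

	public override void ItemUpdating(SPItemEventProperties properties)
	{
		Log("ItemUpdating", properties);
	}

	public override void ItemUpdated(SPItemEventProperties properties)
	{
		Log("ItemUpdated", properties);
	}

	public override void ItemDeleting(SPItemEventProperties properties)
	{
		Log("ItemDeleting", properties);
	}

	public override void ItemDeleted(SPItemEventProperties properties)
	{
		Log("ItemDeleted", properties);
	}

	private static void Log(string eventName, SPItemEventProperties properties)
	{
		//Write a simple trace entry so we can see the order in which the events fire.
		Trace.WriteLine(string.Format("{0} fired on list '{1}'", eventName, properties.ListTitle));
	}
}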

The Results

Here are the different sets of events that were called for each test.

When using Office Word SaveAs

ItemAdding – ItemAdded
Office applications seem to have their own integration that avoids calling the ItemUpdating/ItemUpdated methods.

When using NotePad SaveAs

ItemAdding – ItemAdded – ItemUpdating – ItemUpdated – ItemDeleting – ItemDeleted – ItemAdding – ItemAdded – ItemUpdating – ItemUpdated
This was one of the more surprising results for me. I have no idea why the system deletes the first item and then re-creates it. My guess is that it has something to do with how Windows saves things to network drives.

When using MSPaint SaveAs

ItemAdding – ItemAdded – ItemUpdating – ItemUpdated – ItemDeleting – ItemDeleted – ItemAdding – ItemAdded – ItemUpdating – ItemUpdated – ItemUpdating – ItemUpdated – ItemUpdating – ItemUpdated
Yes, it actually called ItemUpdating/ItemUpdated three times when saving from MSPaint. I included this to demonstrate the point that the events called differ depending on which program you use to save the file.

When using the WebUI Upload or Add Item buttons

ItemAdding – ItemAdded
After these initial events you get the item properties screen where the user can enter values for the fields in the library. Then the next set of events fires…
ItemUpdating – ItemUpdated

Windows Explorer Copy-Paste or Drag-N-Drop

ItemAdding – ItemAdded – ItemUpdating – ItemUpdated – ItemUpdating – ItemUpdated

Lessons Learned

In my particular case, if I set the field values during ItemAdding/ItemAdded, then the user had the opportunity to override those values when using the WebUI to add a document. However, if I set the values during the ItemUpdating/ItemUpdated events, the values didn’t get set at all when a user saved a file with an Office application. I came up with two solutions to this issue:

Force ItemUpdating to be called.

By adding a call to properties.ListItem.SystemUpdate(false) in the ItemAdded event, you can force the framework to invoke the ItemUpdating/ItemUpdated methods … even when MS Word is used to save the file. Then you just set the field values in the ItemUpdating method as usual.
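
Here is a minimal sketch of that approach; the field name and value in ItemUpdating are just placeholders for whatever automation you actually need:

using Microsoft.SharePoint;

public class DocumentFieldReceiver : SPItemEventReceiver
{
	public override void ItemAdded(SPItemEventProperties properties)
	{
		//Calling SystemUpdate(false) here (false = do not create a new version)
		//forces the ItemUpdating/ItemUpdated events to fire on every add path,
		//including saves from Office applications.
		properties.ListItem.SystemUpdate(false);
	}

	public override void ItemUpdating(SPItemEventProperties properties)
	{
		//Set the field values here as usual.  "MyField" is just an example name.
		properties.AfterProperties["MyField"] = "Some calculated value";
	}
}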

Modify the Item Properties Form

You can override the Edit Properties form on document library items so that it doesn’t allow the user to set values for the fields you want. Then the user can’t modify/override the values you set in your code during the ItemAdded event.

Creating a Time Dimension

In my last blog entry I explained how to add time intelligence to your PerformancePoint dashboard. That post was based on the assumption that you already had a time dimension created in your SQL Server Analysis Services cube. In this blog entry I will go over the different options you have for adding this time dimension to your cube. In addition, I will provide some general design tips for time dimensions and explain how the decisions you make during the design phase affect how you can use the time dimension in your dashboard. There are two options available for adding a time dimension: creating the time dimension manually, or creating the time dimension automatically with Business Intelligence Development Studio (BIDS). While the second option looks a lot more tempting, there are pros and cons to both.

As you will see, allowing BIDS to create the time dimension, while providing you with the essentials, also adds a lot of fluff to the underlying time table. It adds a lot of seemingly redundant columns that may fall under the category of “too much information” for your situation, and what you end up with is a very bloated table. The other downside is that you have very little control over the naming conventions of the different subdivisions. For example, the default naming convention for a fiscal year is of the form “Fiscal Calendar <year>”. If this doesn’t suit your needs, your only option is to create a script that manually changes all of the values in your time table. So while doing this through BIDS automates the process, you have little to no customizability short of scripting the changes in the table after the fact. On the other hand, creating the time dimension this way takes a matter of minutes as opposed to hours, so it is definitely the user-friendly option for people who want to get a time dimension up and running quickly.

In the first section, I will go over the process of designing the underlying time table yourself in Microsoft SQL Server. In the second section, I will go over the process of setting up the time dimension in BIDS after you’ve created the time table in your database. In the final section, I will briefly go over the wizard that BIDS provides for automatically creating the time dimension.

Designing a Business Intelligence Time Table

The basic idea of a business intelligence time table is to provide a row for every date within the time period that you want to cover on your dashboard. For example, if you have been collecting data since 2004, then you need to have a row in your time table for every day since 2004, plus additional rows for every day that you want to collect data for in the future. A good range for the above example would be to have a row for every day between January 1st, 2004 and December 31st, 2020. This range will be the window of time that you will be able to see data for in your time-aware dashboard, so make it big enough that you don’t have to extend it too often.

Besides having the actual dates in your time table you will want to have columns that will allow you to split the dates into their respective months and years. The time intelligence in your dashboard is only as smart as the data you give it, so don’t be afraid to add as many subdivisions of time as possible. For example, if your company’s fiscal year doesn’t start on the same date as a normal calendar year, then you can add columns to indicate which fiscal year the date is part of. The same goes for the quarters and semesters of fiscal years. A good minimum for subdivisions would be to have the date, followed by its month, followed by its year.

In order to have a usable time table it is highly recommended to have both a numerical key as well as a textual description for each of the subdivisions. By this I mean that in addition to having a column for the number of the month, we should have another column that gives the textual description of that month. For example, in the row for March 5th, 2010 we would have a column that indicates that this date lies in the 3rd month of the year, as well as another column that indicates that the month is called “March”. This example is shown in the image below.

Time Table Row

Having your time table set up like this will allow SQL Server Analysis Services to know what order to put your months in, and will also allow you to put a label on the month instead of just a simple integer.
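
As a concrete illustration, here is a small C# sketch that would populate such a time table for the January 1st, 2004 to December 31st, 2020 range mentioned earlier. The connection string, table name (DimTime) and column names are assumptions for the example only; you could just as easily build and fill the table with a T-SQL script:

using System;
using System.Data.SqlClient;

class TimeTableGenerator
{
	static void Main()
	{
		//The connection string, table name and column names are placeholders.
		string connectionString = "Data Source=MYSERVER;Initial Catalog=MyDataWarehouse;Integrated Security=True";
		DateTime startDate = new DateTime(2004, 1, 1);
		DateTime endDate = new DateTime(2020, 12, 31);

		using (SqlConnection connection = new SqlConnection(connectionString))
		{
			connection.Open();

			//One row per day, with both the numeric key and the textual description for
			//the month (e.g. March 5th, 2010 gets MonthNumber = 3 and MonthName = "March").
			for (DateTime date = startDate; date <= endDate; date = date.AddDays(1))
			{
				using (SqlCommand command = new SqlCommand(
					"INSERT INTO DimTime (DateKey, FullDate, MonthNumber, MonthName, CalendarYear) " +
					"VALUES (@DateKey, @FullDate, @MonthNumber, @MonthName, @CalendarYear)", connection))
				{
					command.Parameters.AddWithValue("@DateKey", int.Parse(date.ToString("yyyyMMdd")));
					command.Parameters.AddWithValue("@FullDate", date);
					command.Parameters.AddWithValue("@MonthNumber", date.Month);
					command.Parameters.AddWithValue("@MonthName", date.ToString("MMMM"));
					command.Parameters.AddWithValue("@CalendarYear", date.Year);
					command.ExecuteNonQuery();
				}
			}
		}
	}
}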

Your time table can have very simple subdivisions (e.g. day, month, year, etc.) or it can have very complicated subdivisions (e.g. day, month, quarter, semester, fiscal year, etc.). It all depends on how you want to slice your data on your dashboard. An example of a complicated time table can be found in the Adventure Works database. The UML diagram for their table can be found below.

Time Table UML

As you can see, their table even includes localization information for each of the subdivisions. Another thing to notice is that there is even a subdivision for weeks. This is a weird subdivision because there is no way to know whether your month or year subdivisions will start at the beginning of a week or somewhere in the middle of a week. This means that the subdivision for weeks will not fit into a rigid calendar structure, and needs to be considered on its own. You’ll see this in the next step when we add the time dimension to the Analysis Services cube.

Creating the Time Dimension

The first thing you need to do in order to create your time dimension with BIDS is to add the time table to the data source view in your Business Intelligence project. To do this, you simply need to right-click the background of your data source view, click “Add/Remove Tables”, and move the time table over to the right side. Once you’ve done this, your next step should be to create some relationships between your time table and your fact tables. This is required for time intelligence to work at all with your dashboard, so make sure you perform this step. Any relationships that you create in your data source view will be carried over to your cube when we add the time dimension, so this is a good time to make decisions about how your facts relate to time.

Once all of your relationships are defined it is time to add the time dimension to your Business Intelligence project. To begin, right-click the “Dimensions” folder and click “New Dimension”. In the dialog that comes up, you need to choose to “Use an existing table” and then pick the time table that you added in the previous step. If you have your time table set up with a date key and a date description, then change the “Name Column” field to the date description column. Otherwise, the defaults should be OK. In the next step you will be asked to choose all of the attributes for your dimension. It is recommended that you leave out any description columns in this step, as they will be added later as name columns to their key counterparts. For example, if you have a “MonthNumber” column and a “MonthDescription” column, only choose the “MonthNumber” column to add at this point. You can also optionally indicate the type of data that each attribute holds. An example of how this window should look after everything is selected is shown in the image below.

Calendar Attribute Setup

The next step of the process is to pick the name columns for the “key” attributes that you picked in the previous step. In order to do this, you simply need to click on the attribute in the dimension designer and modify the “NameColumn” property to point to the column that contains the textual description of the key column. This name column will be the label displayed to the user when they browse your calendar hierarchy.

In a rigid calendar structure it is important that you define how the different subdivisions relate to each other. This can be accomplished by defining the proper key columns for your attributes as well as defining the attribute relationships in the time dimension. When defining the key columns of your attributes you need to define them in a way that they can be uniquely identified in your time table. For example, for the “Month” attribute, it is not enough to have your key defined by the month number. This is due to the fact that the month number repeats itself every year, so your “key” to the month is not actually unique. The solution to this problem would be to add the “Year” attribute as a secondary key. Since a particular month will only show up once per year, this key definition is satisfactory for uniquely identifying each month in the time table. Another example for a key definition is found in the image below. This is the key definition for the “Calendar Quarter” attribute. You could read this definition as follows: “The calendar quarter only comes around once per year”. Once you have defined a key as a collection of multiple columns, you will also need to set the name column for that attribute. This can simply be the source column of the attribute if you don’t have a description column defined for it. After this is done you are then ready to start creating the attribute relationships.

Key Columns

To define the attribute relationships you need to go to the “Attribute Relationships” tab of your time dimension. By default you will see all of the attributes in your time dimension related directly to the primary key of your time table. This should be changed to represent the natural hierarchy of a calendar. To add a relationship, simply right-click on the attribute that you want for the left side of the relationship and click “Add New Relationship”. Then, for the right side of the relationship, you will choose the attribute that represents the next subdivision in your calendar hierarchy. For example, if you’re adding a relationship to the Date attribute, then you would want to pick the Month attribute as the right side of the relationship. Once the two attributes are selected, change the relationship type from “flexible” to “rigid”. A rigid relationship is one that never changes throughout your time dimension. For example, by defining a rigid relationship between the “Month” attribute and the “Quarter” attribute we are saying that a month will always be a part of one quarter, not multiple quarters. In other words, January always falls completely under the first quarter of the year, never the second, third, or fourth quarters. A good way to do this would be to start with the “Date –> Month” relationship, followed by the “Month –> Quarter” relationship, followed by the “Quarter –> Semester” relationship, and so on. Once you’ve defined all of your attribute relationships your diagram should look something like the one below.

Attribute Relationships

The final step of creating your time dimension is to create the user-defined hierarchies. Creating these hierarchies is pretty self-explanatory. You simply need to drag and drop your attributes one-by-one into logical hierarchies. As a rule of thumb you should always follow the same structure that you have in your attribute relationships. These hierarchies will be the source of all your time intelligence on your dashboard, so make sure that they have enough levels to give the users complete flexibility when they hook up the hierarchies to time period filters on their dashboard. The image below shows an example of how your time dimension should look after everything is done.

Final Time Dimension

Using the Time Dimension Wizard

The more user-friendly way to create your time dimension is to use the built-in wizard in BIDS. Using the time dimension wizard is a pretty straightforward process, so I will simply go through each of the windows to describe what your options are in each step. To start the wizard, right-click the “Dimensions” folder and click “New Dimension”. In the window that comes up you have two options available to create your time dimension: “Generate a time table in the data source” and “Generate a time table on the server”. Choosing the former will generate the time table in your existing data source, and will also give you the option of generating the underlying table in your database. Choosing the latter will create a time dimension on the same server as your business intelligence project without touching your underlying data source. Once you have chosen a method, you can move on to the next step.

Subdivision Selection

In this step you need to define exactly how much time your time dimension will cover. As I mentioned before it is very important to pick a range that goes well beyond the present date of your data. This will ensure that the time dimension will stay valid for a long time after you’ve deployed your dashboard. On top of this you need to tell the wizard which time subdivisions you would like your time dimension to have. You should always pick ALL of the subdivisions that apply to your corporation’s calendar. But, as a bare minimum, you should pick the Date, Month, and Year subdivisions. Finally, in this step, you can choose the language that your calendar will be in. Once this is done you are ready to move on.

Calendar Selection

In the next step you have the option of choosing some additional calendars to create in your time dimension. Only add additional calendars as needed, otherwise you will simply clutter your table with unnecessary information. Once you’ve selected your additional calendars you can move on to the final step. In this final step you are simply asked to name your time dimension and to review the attributes and hierarchies that will be created. Additionally, if you chose to create your time dimension in the existing data source, check the “Generate schema now” checkbox to automatically add the time table to your backend database. Once everything is finished, you should have a time dimension that looks like the figure below.

Final Generated Time Dimension

As you can see, there are a lot of excess attributes that are created when automating the creation of your time dimension. So while this is not the cleanest way to do things, it is quick and it will work out-of-the-box.

Once you’ve created your time dimension (either manually or through the BIDS wizard), the last thing you need to do is to add the time dimension to your SQL Server Analysis Services cube. After this is done, you are ready to integrate time intelligence into your dashboard!

Useful Links

To learn how to use this time dimension in your PerformancePoint dashboard, please read my previous post:
http://blog.miketangogroup.com/2011/01/04/adding-time-intelligence-to-your-performancepoint-dashboard/

Building SharePoint Web Parts – Basic Handling of Exceptions

I’ve gone through the process of creating custom web parts a few dozen times now, and I thought I’d share an exception handling pattern that I’ve found very useful. I keep re-using this pattern for two reasons:

  1. It is REALLY simple.
  2. It works.

If a web part doesn’t handle an exception, SharePoint will catch the exception and display the dreaded generic fault page. Although there are situations where you might want the whole page to fail, most of the time it is a better user experience if the page remains intact and just the custom web part fails. It is possible to avoid the generic fault page by following the standard practice of wrapping all of your code in try-catch blocks. I always wrap small, recoverable sections in their own try-catch blocks and then wrap the entire method in a try-catch that handles unrecoverable errors, like this:

private void myMethod()
{
	try
	{
		if (_sUnrecoverableException != null)
			return;

		//Some Code...

		//A simple recoverable exception
		Sprocket sSprocket1;
		try
		{
			sSprocket1 = wWidget1.getSprocket();
		}
		catch(NullReferenceException nreRecoverable)
		{
			//An exception caught here can be recovered from.
			//This is just a sample; you would never use exception handling to
			//catch an uninitialized variable like this.
			wWidget1 = new Widget();
			sSprocket1 = wWidget1.getSprocket(); //We've recovered and can continue
		}

		//Some more code...
	}
	catch(Exception eUnrecoverable)
	{
		//Exceptions caught here cannot be recovered from, the webpart needs to
		//gracefully fail.
		_sUnrecoverableException = "myMethod exception: " + eUnrecoverable.Message;
		//I find messages of the format "MethodName: Exception text" to be very useful
		//during development.  You would replace this text with a user friendly equivalent
		//when making a production build.
		//Here is also where you would write the full exception to the log as well.  You could
		//use code like the following to do so (EventLog lives in the System.Diagnostics namespace):
		if (!EventLog.SourceExists("MyWebPart"))
			EventLog.CreateEventSource("MyWebPart", "Application");
		EventLog log = new EventLog();
		log.Source = "MyWebPart";
		log.WriteEntry(eUnrecoverable.ToString(), EventLogEntryType.Error);
	}
}

The _sUnrecoverableException referenced above is a class-level member variable. It tracks whether an exception occurred and holds the message to display to the user. A member variable is necessary because there is no way to pass the exception directly between web part lifecycle methods; i.e. if the exception occurred during CreateChildControls(), it must be stored until it can be dealt with in the Render() method. It is also a good idea to avoid unnecessary processing by checking _sUnrecoverableException at the beginning of all the lifecycle methods, as in the sketch below.
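
Here is a rough sketch of how the member variable and the lifecycle guard fit together; the Label control is just a placeholder for your web part’s real content:

using System;
using System.Web.UI.WebControls;
using System.Web.UI.WebControls.WebParts;

public class MyWebPart : WebPart
{
	//Holds the user friendly error message if an unrecoverable exception occurred,
	//or null if everything is OK.
	private string _sUnrecoverableException = null;

	protected override void CreateChildControls()
	{
		//Skip the work entirely if an earlier lifecycle method has already failed.
		if (_sUnrecoverableException != null)
			return;

		try
		{
			//Build the web part's controls here (the Label is just a placeholder).
			this.Controls.Add(new Label { Text = "Hello" });
		}
		catch (Exception eUnrecoverable)
		{
			_sUnrecoverableException = "CreateChildControls exception: " + eUnrecoverable.Message;
		}
	}
}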

To handle the exception, just place a check in the Render method that displays either the normal web part or the error message. It is important that the code that displays the error message gives no opportunity for another exception to occur. If an exception does happen at this point, there is nothing that can be done to gracefully handle it; just let SharePoint deal with it and display the generic fault page. Here is an example of what a simple Render method might look like:

protected override void Render(HtmlTextWriter writer)
{
	if (_sUnrecoverableException != null)
	{
		//Display the user friendly error message.  It will be ugly, but is
		//unlikely to throw an exception.
		this.Controls.Clear();
		this.Controls.Add(new LiteralControl(_sUnrecoverableException));
	}
	base.Render(writer);
}

…and that is all there is to this particular pattern.

For further reading, here are the best links I’ve been able to find about exception handling in SharePoint:

http://www.sharepointsecurity.com/sharepoint/sharepoint-development/webpart-exception-handling-standards/

http://msdn.microsoft.com/en-us/library/ff647598.aspx

Adding Time Intelligence to Your PerformancePoint Dashboard

One of the many requirements of a good BI dashboard is that the user is able to view the BI metrics for a particular time period, or even for a particular day. There are many levels of implementation required to get time intelligence into PerformancePoint Services. The process goes end to end from the database, to your Analysis Services cube, and finally to the PerformancePoint objects themselves once they are part of a dashboard. In this post I will provide a guide for integrating the time intelligence from your Analysis Services cube into your dashboard using Dashboard Designer. I will also provide some tips and tricks that I found along the way.

Adding Time Intelligence to the Data Source

To add time intelligence to your dashboard, the first thing you need to do is to add time intelligence info to the data source that will be using it. To do this you can simply click on the data source, in the data source list, and then click on the “Time” tab at the top. This will bring up all of the configuration options for the time intelligence.

The first option is pretty straightforward; it asks you to pick the dimension that contains the data from the time table. In particular, you need to pick the time hierarchy that you would like to use for the time intelligence in that data source. For example, if your time dimension is named “Date” and you would like to use a fiscal calendar, then you would pick the one that looks like “Date.Date.Fiscal”, “Fiscal” being the name of the hierarchy that represents your fiscal calendar.

The next configuration option asks you to choose a date that starts the year in your calendar, as well as the level that the date is at in the hierarchy, or, in other words, the time subdivision that the date represents. For best results, you should pick an actual date that starts the current year and not some higher subdivision (such as Month, or Quarter). If you pick a higher subdivision you will limit the levels at which you will be able to slice your data. For example, if you pick “January 2010” as the start of your year, then you will never be able to slice by individual days because the “Day” subdivision is one lower in the hierarchy tree. So as a rule of thumb, you should go as deep as your hierarchy allows.

The next thing that you need to configure is the reference date that the date in the previous step refers to. For all intents and purposes, this date should simply be the same date that you chose in the previous step. However, if you picked a time period at a level higher than a specific day then you need to pick the first date in the period that you specified in the previous step. For example, if you picked “January 2010” in the previous step then you should put January 1st, 2010 as your reference date. It is also possible to transpose a period in the past to be represented as the present with these settings. All you need to do is pick a date in the previous setting that represents the same day as your reference date but in a time period in the past. For example, if your reference date is January 1st, 2010 then you could pick January 1st, 2006 to view all of 2006’s data as if it were the present. This actually gets confusing when you incorporate it into your dashboard because a calculation for “today” would actually be today’s date but in some period in the past. So unless you’re working with purely legacy data, you should always pick a date in the current time period.

Finally, you need to specify the time member associations in the bottom panel. This step is simply a matter of mapping each level in your time hierarchy to its corresponding time subdivision. If the attributes in your time hierarchy were all named appropriately then this is really just a matter of picking the name in the drop down that is the same (or similar) to each hierarchy level.

Data Source Settings

This screenshot shows an example of what the data source settings should look like once they are complete. As you can see, I've made it so that all of the data from 2006 will be transposed to the current time period (2010).

Creating a Time Filter

The next step to adding time intelligence to your dashboard is to create the actual time filter that will be placed on your dashboard. This time filter will be the controller used to select the desired time period to be displayed.

To create a filter you simply need to right-click the PerformancePoint Content list in Dashboard Designer and go to New –> Filter. From here, pick the Time Intelligence filter and click “OK”. In the next step you need to add the data source that now has the time intelligence configured. You can add multiple data sources to this list, however you will only be able to use one at any given time. The next step asks you to enter all of the time formulas that you would like to make available to your users. These will be the time periods that the user can choose from when they go to your dashboard. Some possible formulas could be “Day” (today), “Day-1” (yesterday), “Month” (this month), “Year-1” (last year), and so on. These formulas will automatically bring your user to the specified time period relative to that day’s date. Once you have all of your formulas created you can click the “Preview” button to see if they were created correctly. All of the values returned in the preview should be correct according to today’s date.

In the final step, the wizard asks you to select the display method that you would like for the time filter. Since this is simply a list of formulas, the “List” option and the “Tree” option will be pretty much the same. However, if you want to be able to compare “Today” to “Yesterday”, for example, you can pick the “Multi-Select Tree”, which will allow your user to pick as many periods as they like for a side-by-side comparison. Getting the side-by-side comparison to work properly requires a little more work, however, so I will get into that in the next section.

Filter Formulas

This screenshot shows some sample formulas that you can use for your time filter.

Hooking up the Filter to the Dashboard

Once the time filter is setup there is only one more step to having time intelligence in your dashboard, and that is the process of hooking up the time filter to the rest of the PPS objects on your dashboard. There are two types of objects that can consume your time filter: a scorecard and an analytic chart. Both of these require slightly different setups so I’ll go through each of them individually.

Scorecard

On your dashboard find, or create, a zone to put your filter into and then drag and drop it into that zone. Assuming you already have your scorecard on your dashboard, you are ready to hook it up to the time filter! To do this, simply drag the “Formula” field from the time filter and drop it on to the scorecard’s zone. This will automatically create a connection between the two objects and bring up a connection configuration dialog. In this dialog, under the “Values” tab change the “Connect To” dropdown to “TI Formula”, and click “OK”. This TI Formula is a built-in “hook” for scorecards. It allows you to connect to a time intelligence filter without messing around with the scorecard itself. Once you deploy your dashboard you will be able to use the time filter to view the scorecard stats for a particular time period. The filter works asynchronously so that as soon as you change the value in the filter, the values in the scorecard will adjust to that time period without making the user refresh the page.

Scorecard Connection

This is what the connection settings should look like when you connect your scorecard to your time filter.

If you picked the multi-select tree as your filter type then you need to set up your time intelligence a little differently. In the editor for your scorecard you’ll need to add the time hierarchy that your time filter is using to one of the axes of your scorecard. For example, if you want to compare time periods side-by-side, in columns, then you simply need to drag and drop the time hierarchy above your columns. When it asks you to select the members you would like to use, select “All Periods”. This is because we want the time filter to control the members that are displayed, not the scorecard. In the dashboard editor, drag and drop the “Member Unique Name” field from the time filter on to the scorecard’s zone. In the connection configuration dialog, change the “Connect To” dropdown to “All Periods” under Column Hierarchies. If you previously had something connected to the TI Formula of the scorecard, you will have to delete this connection for the multi-time-period comparison to work.

Analytic Chart

The analytic chart requires a few more steps to set up because it doesn’t have the same TI Formula to connect to. The way to connect time intelligence to your analytic chart is through the “Background” filter of the analytic chart. To add a background filter you need to open your analytic chart in the editor window and then simply drag and drop the time hierarchy that the time filter is using on to the “Background” panel. This “Background” filter is basically your hook into the WHERE clause of the MDX query. Putting a hierarchy here allows you to attach filters to the data displayed in the chart without changing the two axes. This means that you can transparently filter your data by some “objective” hierarchy that is not explicitly shown in your chart.

Once this is done, you just need to hook up the analytic chart to the time filter on your dashboard. To do this, open your dashboard in the editor and drag and drop the “Member Unique Name” field on to the analytic chart’s zone. In the connection configuration dialog box change the “Connect To” drop down to the name of the time hierarchy that you put in the “Background” filter. Re-deploy the dashboard and you then have complete control over the time period that you view for the analytic chart.

Chart Connection Settings

This is what your connection settings should look like when you connect your analytic chart to your time filter.

If you selected the multi-select tree as the filter type, then this setup will still work, but you will only see the total of your measure for ALL the time periods that you chose. If you want to see a side by side comparison you simply need to open your analytic chart in the editor and put the time hierarchy under the axis that has your measurement instead of putting it in the background filter. You will then see that the chart will split the results into the chosen time periods.

Sample Dashboards

Here are two screenshots of what your dashboard could look like when you add time intelligence using either the single-select drop down filter, or the multi-select tree filter.

Dashboard Single

Sample dashboard with the single-select drop down filter.

Dashboard Multi

Sample dashboard with the multi-select filter.