
The Script

Now we can jump into the deployment feature's feature receiver and start building the structure of our site. If this code were to be read, line by line, by someone who's not a developer, that individual should be able to roughly describe what the logic is going to do – like driving directions to a place they've never been. There are no heavy conditionals, no service calls, and no polymorphism; just a beautiful, pristine listing of extension method after extension method.

Avoiding Deployment Limbo

An empty feature receiver class is like a blank canvas upon which we will paint our portal. All of the code we are going to write will be in the FeatureActivated method. All Code deployments aren't concerned with the standard install-activate-deactivate-uninstall "lifecycle" of a feature. Now in general, when I'm writing application code, I always make sure to have my FeatureDeactivated method undo everything my FeatureActivated method did.

(I actually use FeatureInstalled and FeatureUninstalled rarely. The only really good example of their usage I've come across is when you anticipate that your feature will be activated in several sub sites; FeatureInstalled can do some initialization work and FeatureUninstalled can clean things up when the last instance of the feature is deactivated.)

The problem with the feature lifecycle is that if errors are not caught properly in your activated or deactivated methods, we can put our portal in an inconsistent state. And by not caught properly, I mean that they are indeed caught (and logged), but not re-thrown...or worse: eaten in a catch statement and not logged at all.

Features activate successfully if the event receiver fires, executes, and goes out of scope of its own accord; SharePoint has no idea if your code worked; it only cares that it didn't blow up. So it's better not to trap errors at all than to eat and log them without re-throwing. Remember that this isn't application logic where we want to fail gracefully; this is deployment logic.
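To make the contrast concrete, here's a rough sketch of the anti-pattern. ProvisionEverything and Logger are hypothetical stand-ins, not methods from this book's code; the point is simply that the catch block convinces SharePoint the activation succeeded even though the provisioning did not.

public override void FeatureActivated(SPFeatureReceiverProperties properties)
{
    try
    {
        ProvisionEverything(properties); //blows up halfway through
    }
    catch (Exception ex)
    {
        Logger.Write(ex.ToString()); //logged, eaten, never re-thrown: hello limbo
    }
}
//the deployment version is simply the body with no try...catch at all;
//any exception bubbles up and the activation fails loudly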

What happens if you don't re-throw is you get stuck in deployment limbo. (This is different than the deployment hell I mentioned earlier. If I were to continue to indulge this metaphor, All Code would clearly be deployment heaven, but I'll spare you.) The following logical fallacy is what lures brave SharePoint souls into limbo:

  1. A feature is coded, deployed, and installed. For this example, all we need to know is that some arbitrary line of code (we'll say it happens to be Line #2) in the FeatureActivated event provisions a content type and Line #3 provisions a list. They are both protected by the same try...catch statement.
  2. The feature is activated.
  3. Line #3 throws an exception that is caught and logged and eaten.
  4. The method goes out of scope.
  5. The feature is activated successfully in SharePoint's eyes.
  6. The developer quickly realizes that something is amiss by checking ULS or the Event Viewer (or whatever) or going to the site and not seeing the list.
  7. The feature is deactivated.
  8. Code in the FeatureDeactivated blows up, since it was trying to delete a list that was never provisioned.

    (Limbo: you are now stuck with a broken feature that can't be deactivated or uninstalled cleanly.)

  9. Grr. FeatureDeactivated is hardened to defend against the possibility that the list could have been deleted. Line #3 in FeatureActivated is also fixed so that the initial exception won't happen. The catch also now re-throws the exception to avoid limbo.
  10. The new code is deployed, and the feature is successfully deactivated.
  11. The feature is activated again. This time it errors out.
  12. It's discovered that Line #2 bombs this time, because the content type already exists from the last false-positive "successful" feature activation.

    (Limbo: now we can't even activate our feature!)

  13. Grr. Fix that too. Redeploy again. Manually delete the content type. Try activation again...

As you can see, it'll take a few iterations to get this right.

Our deployment feature, however, won't have this problem. No one will have to light candles to save our feature from purgatory. This is because there are two things not allowed in an All Code deployment script: try...catch statements and FeatureDeactivated event handlers. Even though these are best practices for standard feature logic, remember once again we are writing deployment code.

And deployment code is special.

Try...catches are bad because like I said in the "Brittle Deployment Scripts" section, we want our feature activation to die a loud and painful death when something goes wrong. Otherwise, we can fall into false-positive limbo with our site. And we'd be lucky if bugs caused by false-positive-actually-failed deployments only make it as far as the demo for the shareholders the day before the site goes live.

Try...catch...bad. As for avoiding FeatureDeactivating events, there are two reasons why I don't bother with this logic like I otherwise would for standard application features. The first is pursuant to what I covered in the "Repeatable" section: since we'll be starting with a clean slate in production, it's faster to blow away your site collection and recreate it over and over again than to not only have to implement the deactivation logic, but also to have to run it every time as well.

We'll look at the PowerShell script that automates this for us later. Other than the fact that it's much faster to sit and watch this script churn fifty times over the course of a project than to take the time to implement the deactivation code, there are certain things that just don't delete well regardless. An example that we've all groaned about is content types. If you provision a content type, then associate it with a list, then delete the list, the content type might still think it's being referenced and refuse to go away.

(Limbo: the site is stuck with "old" content types that won't die.)

I've seen the same behavior with page layouts and instances of pages. There is speculation that this is a bug or that there are workarounds (apparently the "move-it-into-a-folder-and-delete-the-folder" trick from 2007 no longer works), but I'm not interested in that: if certain assets are ornery and won't cleanly delete, then don't fight them. Just bulldoze their house instead.

Now of course, we can be efficient about this. You don't need to blow-away-and-recreate your site collection with every build! Even if your development environment becomes a little unwieldy, consider holding off if it's still stable enough to accurately test your current component. I like to wait until I absolutely need to clean my environment up before taking the thirty-ish seconds to do so. It's sort of like being okay with eating slices of that old pizza in your fridge as long as they haven't cultivated any mold yet.

The other reason I don't implement deactivating logic is simple: why the HELL would anyone unprovision a site? If it's ever decommissioned, the content database is backed up and the site collection is deleted. If our feature activation is like wrapping a present, then implementing deactivation logic would therefore be writing code to untie the bow, peel and scratch off each piece of tape, and meticulously remove and then reroll the wrapping paper onto the spool it came on.

That's so much unneeded effort spent unwrapping when all we had to do was tear the paper off so we could fix our script and try wrapping it again! Or even worse: could you imagine some poor admin accidentally deactivating the wrong feature and *poof* the site collection is empty? It's nice that we can defend against that possibility by actually doing less work! We simply can't risk losing data.

So leave the FeatureDeactivating method commented out (like it is when Visual Studio creates the file) if you want. Or uncomment it and add a single line that throws a NotImplementedException if you want. You can even make the Structure feature hidden so that it can only be activated via script, and alleviate the need to ever even worry about deactivation. (I'll have more to say about hidden deployment features later.)
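If you go the NotImplementedException route, it's literally a one-liner; here's a minimal sketch:

public override void FeatureDeactivating(SPFeatureReceiverProperties properties)
{
    //deployment features are never unprovisioned; blow away the site collection instead
    throw new NotImplementedException();
}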

Create If Not Already There

This paradigm works great for "blue sky" sites when there's no legacy or existing portal to deal with. For situations where the production environment is just a glimmer in IT's eye and we get to build from the ground up, it's the fastest way to deploy a site. However, this isn't always the case. When you're called in to superman a failed site or are spearheading version two or dealing with tight schedules that call for content to be loaded in conjunction with site provisioning, we don't have the luxury of blowing away our site collection every time a new site column is incorrectly deployed with a typo in its description.

Really the only difference is that we have to code around assets that are already there – or should be there. We're sort of living in limbo in these situations. Since we can't reset the site collection (unless we want to wait around for the hour or so it takes to restore a content database that's a few gigs in size or larger) whenever something in the script breaks, any error could leave us in an inconsistent state.

All we have to do is change our code from saying "create this content type or bomb out" to saying "create this content type if it's not already there." You might be asking: why not do this all the time? I've debated this a bunch, but two shortcomings of this method ultimately hold me back. First, it's more complicated logic. And since any error could leave the site in an inconsistent state, it requires more debugging. But secondly, and more importantly, it takes us back to a few of the issues discussed at the beginning of the book.
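In code, the change is nothing more than an existence check in front of the normal provisioning call. Here's a minimal sketch of the guard, assuming "web" is the root SPWeb and "id" is the SPContentTypeId the script would otherwise have created unconditionally (the content type name here is hypothetical):

SPContentType existing = web.ContentTypes[id];
if (existing == null)
{
    //not there yet, so provision it exactly as the normal All Code path would
    web.ContentTypes.Add(new SPContentType(id, web.ContentTypes, "Rollup Article"));
}
//otherwise, leave the existing asset alone and move on to the next line of the script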

Consider the situation when you are working on an existing site. The very "over protective" nature of this logic goes against the "brittleness" of deployment scripts mentioned previously. For example, let's say we are removing a list from this site. Unless you're spending a lot of calories keeping your team in sync (serving as build master, conducting code reviews, etc.), we'll see the following likely scenario:

  1. One developer adds code to the beginning of the script that deletes the list.
  2. This code is checked in.
  3. Another developer gets the latest and finds that the list remains after deploying to their local environment: it was recreated by the script since the "create if not already there" logic will work as designed unless it's removed!

These types of situations make the architect's job more challenging. Here are a few I've dealt with this year alone:

  • You're building the next version of an existing site and a lot of manual modifications to the information architecture have been made directly in production via the UI.
  • Content authoring begins in the middle of development instead of after. This is something that makes CMS projects special: the site can technically be in use before development completes – and sometimes before it even begins.
  • You inherit a site designed and built by non-SharePoint developers. It is your job to sift through hard-coded SQL queries into the SharePoint database. It is your job to make sense of the one hundred plus page layouts backed by content types containing dozens and dozens of extraneous site columns and why after all of that, everything on the page is ultimately shoved into content editor web parts. It is your job to refactor Rube-Goldberg-like timer jobs that orchestrate workflows, services, and half the SharePoint API into something that requires only five or six lines of PowerShell to work. It is therefore your job to plan out the deployments, scripts, and manual content retrofitting activities to clean up the site and fix the data.
  • The client fires the consulting firm that undercut your company in the bidding process a month before the site is supposed to go live. You inherit the mess. What might cause a company to get fired so close to the deadline of their SharePoint site (which happens to be a public facing site that corresponds to the launching of their new brand, the opening of their new downtown headquarters, and the unveiling of their new name)?

I'll tell you the story of the worst deployment scenario I've encountered, which, if you could possibly imagine, shows how badly things can go wrong when deployment isn't planned out. Unfortunately, due to legalities and NDAs and all of that, I can't go into specifics about the client or show the "create if not already there" code that got me through these repairs. But the point is to demonstrate how the paradigm of DDD prevails even when a standard All Code deployment isn't the correct choice for a particular situation.

A story of dealing with a SharePoint site created by non-SharePoint personnel is one that most can tell after being around the technology for a few years. The thing is, no one is born a SharePoint developer. No one majors in SharePoint administration. It's a thing you get into or a thing you are assigned to, so I try to be empathetic when I take over someone's first SharePoint effort.

But there was no room for empathy after getting acquainted with this code. The aforementioned SQL query wasn't nearly the worst offense. Neither was the hundreds of continuous lines of static HTML glued to web parts' code behind, the egregious memory leaks, or the AJAX atrocities whose description would quickly drive you into therapy.

The biggest offense of SharePoint naivety here was actually the content types I alluded to earlier. By the time we were on the scene, there were thousands of pages of content already created so we couldn't start over. At the same time, the information architecture was so shoddy that we couldn't leave it. We're talking about page layouts that vary by a single line of CSS, identical content types associated with every single site column with different names, and a completely inconsistent approach to using taxonomy, lookup lists, and choice columns to model enumerations.

There were also other "codemines" (which is a term I coined to refer to hidden bombs in a solution you inherit). When estimating how much it would cost to repair the site, we couldn't possibly explore every line of code and configuration in every file. So we came up with a contingency bucket that we'd hit when we encountered certain landmines we couldn't account for, such as:

  • The fact that the web app was extended to have one URL as the "authoring" environment and one as the "consuming" site. Seems harmless. But the implementation was code behind the master page reading a Boolean app setting to determine if the ribbon should be hidden. That's right: an entire web app to support a single Boolean. Boo.
  • The root web had over a hundred lists in it to support most of the web part queries. This was saved as a template to provision the sub sites, which consume only one or two of these lists; the rest lay fallow. Boo.
  • There was no discernible data access layer. Methods were copy-and-pasted around the web parts with slight modifications made to each one's CAML. Some controls had code in their load method that called a SharePoint web service and then the pre render event hit the API. There was zero consistency and nothing that resembled a pattern. This made every bug fix its own adventure. Booooooooo.

To fix these, I had to come up with a retrofitting plan that cleaned up all the content, refactored all the web parts, and fixed all the bugs. This was a wild deployment exercise that allowed us to fix the information architecture, put a standard deployment plan into place, and use TFS and its ALM tools to guide our huge team to victory in a matter of weeks.

Since we couldn't start over and couldn't abandon what we had, I implemented a three-phased approach to fix this site. In a TFS branch, one of my senior developers and I used All Code to provision the new structural elements: taxonomy terms, user profile properties, site columns, content types, page layouts, web.config mods; everything. Over in trunk, the rest of the team started building new web parts and fixing some of the more heinous ones, preparing them all to live in our data access layer (which didn't exist yet, since we need the new structure in place first).

In addition to this All Code, we had to do a lot of prep work to pull in the existing taxonomy, site column, and content type IDs. This involved a series of PowerShell commandlets that trolled the site and generated C# DDD Constants classes. Once imported into Visual Studio, I used these classes to feed the Guid or SPContentTypeIds to the "create if not already there" logic to determine if the asset existed.

I've had to do tasks like this in a few projects, when there was either no deployment at all and the structure was provisioned manually, or the out of the box Visual Studio / SharePoint integration was used to import them into a WSP (which makes updates tricky). Regardless, I can't stress enough how important it is to be able to uniquely identify your SharePoint assets programmatically.

Second, after this new structure was in place via standard DDD PowerShell-driven deployments, we executed a series of content retrofitting scripts (implemented as more PowerShell commandlets) that associated new content types with existing lists, updated the associated content types of certain page layouts, and generated reports of pages that needed content editor web part content migrated into actual page fields. At the same time, we had our content manager use the previous script's output to manually fix the pages that were too screwed up to automate.

We iterated through this phase for a month with weekly deployments in between: pushing new functionality, running scripts to make the existing content use it, and supporting the people manually fixing problem pages. We'd work our way through a set number of page libraries until the retrofitting activities (both automated and manual) required new functionality or existing bug fixes to continue.

Then each week we'd back up production and restore it to test, then restore test into dev, and finally restore dev into our local environments. Then using a TFS build definition and PowerShell remoting, we'd push and deploy our latest code base to dev nightly, and then back up the chain weekly. Each release started as a hotfix branch, so new dev could continue uninhibited in trunk.

After the retrofitting, we were able to generate SPMetal off the newly-repaired site and finish off the web parts. All the while, we had our dedicated tester (who was invaluable) trolling the dev and test sites daily, using TFS to do test scripts, assign and resolve bugs, and generate burn down and burn rate reports on our progress. Week after week, our "create if not already there" script grew and grew as we had new assets built and old assets fixed.

Finally, we had the cleanup phase. Once all requirements were satisfied and the content was where it needed to be, one last All Code deployment brought it home: deleting all those unused site columns, extraneous content types, and duplicate page layouts. But it wasn't just deleting crap. Remember when I said DDD didn't concern itself with unprovisioning logic? Well this crazy deployment scheme necessitated bending even this most fundamental rule.

We slipped on rubber gloves and scrubbed this site like it was a flooded basement. Our code methodically deleted lists and page libraries that fell under our definition of "not in use." We unhooked event receivers, removed jobs, lobotomized search settings, took a gas-powered pressure washer to the galleries, and deleted, recycled, and emptied every bin of every unused piece of digital flotsam and jetsam we could find. In a word, it was cathartic.

We left them with a squeaky clean site. Looking back, this client didn't fire their first vendor because the site didn't work. (We only had eleven actual bugs in the issues list!) They were canned because the site wasn't maintainable. Deployments didn't work. Backup/restores didn't work. The content managers basically didn't know how to do things in SharePoint.

So victory was not only doing things the right way on the front end, but demonstrating how to deploy things the right way on the back. When the business asked for something new in our first round of updates, it wasn't hard coded into production or copy-and-pasted in SharePoint Designer. It was built on a branch and tested locally by the developer. It went through our gated check-in and continuous integration governance before being automatically deployed to our development environment on Wednesday night. Our tester provided the coverage and remediation documentation before it was merged and released to their test environment the next week.

Finally, when the users were happy, it was sent to production. The beauty of all this process is that each and every deployment, from local development machines to automated deploys from the TFS build server to all the packages that reached their environment, used exactly the same script. It was this precision that won this client over and kept them asking for more: all because we designed our deployment strategy first, and fixed everything second.

In these retrofitting/upgrading/saving the day cases, there is a lot to get over. You can code around manual modifications. You can deploy around existing content. You can script around massive content retrofitting necessities. But for me, what's hardest to get over is the fact that I'm not building something from scratch; I'm having to deal with someone else's code; someone else's vision; someone else's baby.

So in the following code samples, I'll be following the original paradigm I described in the first few sections. "Create if not already there" certainly has its place, especially when you're not starting your site from scratch. You can even get creative in a content retrofitting scenario, with one structure feature that updates the existing information architecture by creating if not there, and another that adheres to conventional All Code and builds out the new assets. But unless you foresee yourself telling similar stories I just did, stick to All Code.

Order Of Operations

Now of course, you can do whatever you want with this code. I'll provide the methods that provision content, but it's up to you to build them to your specs, call them in the correct order (not that there's a "correct" order so much as a "logical" one), and organize the code accordingly. For example, if you are creating a list in a web, you need to create the web first, and use the resulting SPWeb object to provision the list.
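That dependency looks like this in code; here's a minimal sketch using the extension methods introduced later in this chapter (the web and list names are hypothetical):

//the web has to exist before anything inside it can be provisioned,
//so its SPWeb reference feeds the next call
SPWeb news = root.CreatePublishingSubSite("News");
SPList pressReleases = news.CreateGenericList("Press Releases");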

Here's the general "order of operations" for provisioning basic SharePoint objects:

  1. Clean up root web
  2. Create taxonomy
  3. Create site columns
  4. Create content types
  5. Build out site hierarchy (publishing webs will include the "Pages" document library)
  6. Create lists
  7. Create lookup columns (that are based on lists or page libraries)
  8. Update content types with lookup columns

Other optional steps include:

  • Add web parts to pages
  • Clean up lists (default views, default content types, etc.)
  • Site settings (web.config mods, property bag entries, permissions, etc.)
  • Provision web/list templates

In the next few sections, I'll introduce the provisioning code as it's used method by method. Entire listings will be included in the appendix; what's important is gaining familiarity with the basic logic. The details are actually fairly mundane dips into the SharePoint API; the interesting part is how these calls are welded together in an unconventional manner.

Clean up the Root Web

First things first: we need an entry point to all of this goodness. Since this feature is scoped to a site collection, we need to cast the properties.Feature.Parent parameter we get from the feature receiver into an SPSite. Everything we need will come from this object. The first thing that we'll do is grab the root web of the site collection.

(Note: Provisioning code running inside of a feature receiver has slightly different disposal rules than application code executed by a web part. Basically, don't dispose of properties.Feature.Parent; treat this as you would SPContext.Current.Site or SPContext.Current.Web. But anything else flowing from our parent object follows standard SharePoint disposal best practices.)

I like to first clear out any existing sub webs (so that we can be agnostic as to which template the site collection was provisioned with).

Code Listing 4: Structure.EventReceiver.cs

  1. //get site and web
  2. SPSite site = ((SPSite)properties.Feature.Parent);
  3. SPWeb root = site.RootWeb;
  4. //clean web
  5. root.DeleteAllSubWebs();

DeleteAllSubWebs spins through the Webs property of an SPWeb and clears them out. Since we need to be able to run this code against any site template, we are responsible for ensuring that the proper site collection and web based features are activated. We could use out-of-the-box feature activation dependencies for this, but in case we need to enforce a certain order of activation or deal with conditional activation, I like to do it manually (which shouldn't be surprising at this point).
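Since DeleteAllSubWebs is doing the dirty work here, a rough sketch of what such an extension method looks like follows; my version may differ from the full listing, but it captures the idea that webs have to be deleted depth-first, since a web with children can't be removed:

public static void DeleteAllSubWebs(this SPWeb web)
{
    //walk the collection backwards so deletions don't shift our index
    for (int n = web.Webs.Count - 1; n >= 0; n--)
    {
        using (SPWeb child = web.Webs[n])
        {
            //recurse into grandchildren first; a web with children won't delete
            child.DeleteAllSubWebs();
            child.Delete();
        }
    }
}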

A good example of this is publishing. I've very rarely not needed the SharePoint publishing infrastructure activated (even if merely for branding), so I almost always include it. Notice below how I take advantage of the FeatureIds enumeration that's found in Microsoft.SharePoint.Publishing. There are more such listings of identifiers around the API (some of which we'll see soon):

  • Microsoft.SharePoint.SPBuiltInFieldId
  • Microsoft.SharePoint.Publishing.FieldId
  • Microsoft.SharePoint.Publishing.FeatureIds
  • Microsoft.SharePoint.Publishing.ContentTypeId
  • Microsoft.Office.Server.UserProfiles.PropertyConstants

Code Listing 5: Structure.EventReceiver.cs

  1. //ensure features
  2. site.EnsureFeature(FeatureIds.OfficePublishingSite);
  3. root.EnsureFeature(FeatureIds.OfficePublishingWeb);

EnsureFeature takes in a Guid, and if the corresponding feature is not activated at that particular scope, it adds it to the corresponding feature collection (which simply activates it). The feature must of course be installed first, which will happen earlier in the PowerShell script. If the feature is already activated, it does nothing.
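EnsureFeature itself is only a few lines; here's a minimal sketch of the two overloads, assuming the feature definition is already installed in the farm:

public static void EnsureFeature(this SPSite site, Guid featureId)
{
    //the indexer returns null if the feature isn't activated at this scope
    if (site.Features[featureId] == null)
        site.Features.Add(featureId);
}
public static void EnsureFeature(this SPWeb web, Guid featureId)
{
    if (web.Features[featureId] == null)
        web.Features.Add(featureId);
}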

Provision the Site Columns

It's hard to get too fired up about site columns, but they are nevertheless an integral part of any data-driven SharePoint application. Since these are shared across our content types, they need to be created first. Also, we'll see here our first usage of the constants file we discussed earlier. To keep us in the good habit of not only not hard-coding strings and guids, but also organizing everything that might be conceptually used as a unique key in one place, we'll be revisiting the constants class constantly (pardon the pun) and adding to it.

And yes, I really mean we'll be hard-coding guids; we already talked about this. Normally, when working with primary keys, (integers with "Identity Specification" in SQL Server) we're used to having the persistence mechanism generate them for us, as they are always and forever read only. But with the emergence of guids, we can bend the rules a little, and "brute force" uniqueness by generating our own ids and shoving them into the database along with the rest of the record.

I remember this blowing my mind the first time I had a CSLA project on my hands. But now-a-days, I've come to consider integer ids to be kind of weenie. Their only real strength is readability, but this pales in comparison to all the benefits of having an object be sentient enough to know its id upon its inception. This means that, in all of All Code, we are almost never going to be referencing any item from any collection by a string. Since it's a bad call to reference a specific row in a database by anything but its primary key, I want to be following that good practice here with SharePoint.

Following the same pattern of deployment driven design where you prenatally create the code to deploy a particular artifact or module before actually implementing it, your constants class should grow the same way. You'll see me in the following sections first introduce some line of code we'll need to write, and then add a member to the constants class, and then actually write the code that makes reference to the constant.

Back to our example web part: it's going to be querying the Pages library (which is of course a list) of its contextual web, and displaying its contents with a custom layout. I know that this type of "rollup article" example is tried and true and totally boring, but I want to keep this simple, as it's not the focus here. Let's start by defining the site columns (as well as referencing some default SharePoint publishing columns that we'll need later; you'll see more of this sprinkled throughout the Constants) that will be capturing our pages' metadata in the constants class:

Code Listing 6: Constants.cs

  1. public static class SiteColumns
  2. {
  3. public class Category
  4. {
  5. public const string DisplayName = "Category";
  6. public const string InternalName = "Category";
  7. public static readonly Guid Id = new Guid("2D725093-726A-4A8B-A426-B2BB24EABDA2");
  8. }
  9. public class Abstract
  10. {
  11. public const string DisplayName = "Abstract";
  12. public const string InternalName = "Abstract";
  13. public static readonly Guid Id = new Guid("0AABD74C-6BC6-44A5-9041-B571EE482D6C");
  14. }
  15. public class MainContent
  16. {
  17. public const string DisplayName = "Main Content";
  18. public const string InternalName = "MainContent";
  19. public static readonly Guid Id = new Guid("187051FA-78F5-44C1-9FB5-889FC3461A03");
  20. }
  21. public class ExternalLink
  22. {
  23. public const string DisplayName = "External Link";
  24. public const string InternalName = "ExternalLink";
  25. public static readonly Guid Id = new Guid("1B91245F-3969-43F3-9767-B2FB818D1A7C");
  26. }
  27. public class CategoryLookup
  28. {
  29. public const string DisplayName = "Category Lookup";
  30. public const string InternalName = "CategoryLookup";
  31. public static readonly Guid Id = new Guid("608DC2CD-EEAF-45F2-898D-F2622DB406B8");
  32. }
  33. public class RollupDate
  34. {
  35. public const string DisplayName = "Rollup Date";
  36. public const string InternalName = "RollupDate";
  37. public static readonly Guid Id = new Guid("620D38F1-F59F-41FA-A959-B865098B4D44");
  38. }
  39. public class ThumbnailImage
  40. {
  41. public const string DisplayName = "Thumbnail Image";
  42. public const string InternalName = "ThumbnailImage";
  43. public static readonly Guid Id = new Guid("3A170759-8D1D-4119-B80F-D2B30E44F459");
  44. }
  45. }

As we've seen, the constants class is going to be comprised of smaller classes so that namespaces can emphasize the separation of different types of strings. I've experimented with different approaches that simply prepend something like "SITE_COLUMN_" or "WEB_NAME_" to each constant, but that becomes rather clunky in your IntelliSense. Namespaces are not only cleaner and more elegant, but more forceful than convention, which can be victimized by typos.

Finally, namespaces allow us to use the same canonical representation for different elements that might need the same logical name. For example, let's say we have a sub site named "Category" that we're deploying our site column of the same name to. Both our constants for the category sub site and the category site column are called "Category." Instead of having awkward variable names like "Constants.SubSiteCategoryDisplayName" and "Constants.SiteColumnCategoryDisplayName" (which could easily be used incorrectly when you're flying through IntelliSense), we get much nicer usage with "Constants.Webs.Category.DisplayName" and "Constants.SiteColumns.Category.DisplayName." This also avoids what would otherwise be name collisions.

You might have noticed that all the string members have the "const" modifier rather than being declared "static readonly." The difference between the two is rather low level: const variables have their values initialized at compile time, which means they are physically inscribed into the DLL. Members that are static and readonly can be initialized at runtime in a constructor, making them much more flexible. This is only really a concern when it comes to new versions of the assembly containing different values for these members.

Which to use? Since this is provisioning code, this choice is compellingly decided for us by the major benefit of using constants: they "work" with switch statements. This comes up a lot for me: I find myself tempted to be lazy and hard code something after receiving a compilation error that the condition of a switch is not "constant" enough. The only place where we'll be forced to use static readonly variables is for guids, since they require a constructor to be initialized via a string. Fortunately, since guids are guaranteed to be unique across the entire universe, it will never make sense to use one for a switch statement's conditional.
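Here's a tiny illustration of the switch point (the helper method is hypothetical): only compile-time constants are legal as case labels, which the const strings satisfy and a static readonly Guid never could.

private static bool IsCategoryColumn(string internalName)
{
    switch (internalName)
    {
        case Constants.SiteColumns.Category.InternalName: //const string: compiles happily
            return true;
        default:
            return false;
    }
}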

Another temptation I've overcome is to have these classes inherit from a base or implement an interface. We all remember polymorphism from Computer Science 101; it's tantalizing to refactor all these classes with the same field names into something common. However, these classes don't do anything; there's no behavior to abstract. So why have a superfluous base class?

And since interfaces and abstract classes don't lend themselves well to static methods, I don't want to sacrifice the ability to access these values without a class instance just because multiple classes have the same field names. Like I said, my OCD makes it tantalizing to refactor these into something like an "ISiteColumn" interface, but we lose more than we gain. So let's just take our namespace goodness and move on.

Now we can create the site columns:

Code Listing 7: Structure.EventReceiver.cs

  1. //create site columns
  2. HtmlField summary = root.CreateHTMLSiteColumn(
  3. Constants.SiteColumns.Abstract.Id,
  4. Constants.SiteColumns.Abstract.DisplayName,
  5. Constants.SiteColumns.Abstract.InternalName,
  6. Constants.SiteColumns.GroupName,
  7. false);
  8. HtmlField mainContent = root.CreateHTMLSiteColumn(
  9. Constants.SiteColumns.MainContent.Id,
  10. Constants.SiteColumns.MainContent.DisplayName,
  11. Constants.SiteColumns.MainContent.InternalName,
  12. Constants.SiteColumns.GroupName,
  13. false);
  14. LinkField externalLink = root.CreateLinkSiteColumn(
  15. Constants.SiteColumns.ExternalLink.Id,
  16. Constants.SiteColumns.ExternalLink.DisplayName,
  17. Constants.SiteColumns.ExternalLink.InternalName,
  18. Constants.SiteColumns.GroupName,
  19. false);
  20. SPField rollupDate = root.CreateSiteColumn(
  21. Constants.SiteColumns.RollupDate.Id,
  22. Constants.SiteColumns.RollupDate.DisplayName,
  23. Constants.SiteColumns.RollupDate.InternalName,
  24. Constants.SiteColumns.GroupName,
  25. SPFieldType.DateTime,
  26. false);
  27. SPField thumbnail = root.CreateImageSiteColumn (
  28. Constants.SiteColumns.ThumbnailImage.Id,
  29. Constants.SiteColumns.ThumbnailImage.DisplayName,
  30. Constants.SiteColumns.ThumbnailImage.InternalName,
  31. Constants.SiteColumns.GroupName,
  32. false);

CreateSiteColumn creates an SPField in the extended SPWeb, and sets the display name, group, type, and whether the column is required. Considering site administration in terms of organization and user experience while creating or editing content, these are the key fields that need to be populated. Finally, the group name is just a string that allows all the columns provisioned by this feature to be nicely organized in their own group when you view them from site settings.

CreateLinkSiteColumn, CreateImageSiteColumn, and CreateHTMLSiteColumn all are special cases of CreateSiteColumn, setting not only the above-mentioned properties, but also, as noted below, several Publishing-specific ones.

Notice that we're passing a guid id and an internal name to these methods. How do we create an SPField with a given id and internal name? Normally, (and especially with internal names) we are used to these being auto created by SharePoint, and therefore not reliably inferable. Have you ever written an extension method to replace spaces with the cryptically famous SharePoint "_x0020_" encoding? Well, I came up with a way to avoid all of that:

There is a method on SPFieldCollection called "AddFieldAsXml." That means you can give the web an XML string representation of a new field, instead of an object reference. Isn't XML a bad word in this book? Yes. But with code, we can slice and dice our prenatal fields, dump out the XML, manipulate it (by forcing the id and internal name) and then shove that whole mess into the collection as a new field.

First, we need to get the internal name. I like to allow this to be passed in by the developer in case there is some sort of special convention going on. However, if there is no drama around this property, we want to default these names to something canonical: which is [site column group name][underscore][column display name with no spaces]. I wrap this up in a string extension method so that given any field's display name, I can get the internal name. First, as stated above, I define a constant to stand for the site column group name.

Since we have separate classes for all of the SharePoint asset types we're modeling, we can use the same variable name for any one that has the concept of a group, like site columns and content types. This way, our convention stays nice and clean with static fields like "Constants.SiteColumns.GroupName" or "Constants.ContentTypes.GroupName" (as we'll see later).

Code Listing 8: Constants.cs

  1. public static class SiteColumns
  2. {
  3. public const string GroupName = "DDD Site Columns";
  4. }

And now the extension method (which can be any convention you like):

Code Listing 9: Utilities.cs

  1. public static string GetInternalName(this string displayName, string groupName = null)
  2. {
  3. //initialization
  4. string staticName = displayName.Replace(" ", string.Empty);
  5. //determine internal name
  6. if (!string.IsNullOrEmpty(groupName))
  7. {
  8. //get internal name as [group name]_[display name with no spaces]
  9. return string.Format("{0}_{1}", groupName, staticName);
  10. }
  11. else
  12. {
  13. //get internal name as [display name with no spaces]
  14. return staticName;
  15. }
  16. }

Next we need a method to attach to SPField that returns the XML representation of it, with the given id and internal name added as attributes:

Code Listing 10: Utilities.cs

  1. public static string GetSchemaForFieldWithIdAndInteralName(this SPField field, Guid id, string internalName)
  2. {
  3. //initialization
  4. XmlDocument doc = new XmlDocument();
  5. XmlAttribute idAttrib = doc.CreateAttribute("ID");
  6. XmlAttribute internalNameAttrib = doc.CreateAttribute("Name");
  7. //set attribute values
  8. idAttrib.Value = id.ToString();
  9. internalNameAttrib.Value = internalName;
  10. //set xml
  11. doc.LoadXml(field.SchemaXml);
  12. doc.FirstChild.Attributes.Prepend(idAttrib);
  13. doc.FirstChild.Attributes.InsertAfter(internalNameAttrib, idAttrib);
  14. //return
  15. return doc.OuterXml;
  16. }

This method news up XmlAttributes for our unique identifiers, loads the field's SchemaXml into an XmlDocument, adds the attributes, and then returns the entire XML document. Finally, let's look at how it's used:

Code Listing 11: Utilities.cs

  1. public static SPField CreateSiteColumn(this SPWeb web, Guid id, string displayName, string internalName, string groupName, SPFieldType type, bool required)
  2. {
  3. //get internal name
  4. if (string.IsNullOrEmpty(internalName))
  5. internalName = displayName.GetInternalName(groupName);
  6. //create field
  7. SPField field = new SPField(web.Fields, type.ToString(), displayName)
  8. {
  9. Group = groupName,
  10. Required = required
  11. };
  12. //save
  13. web.Fields.AddFieldAsXml(field.GetSchemaForFieldWithIdAndInteralName(id, internalName));
  14. web.Update();
  15. //return
  16. return web.TryGetField(id);
  17. }

What about the other fields we need, like Title, Description, and Category? Title and Description are of course both out-of-the-box, so we don't have to provision our own concept of what a "title" or a "description" is. Whenever I can leverage an existing site column, I do so, and just make sure that it's referenced in my constants class with all the rest. This works great for simple, common types: address, phone number, etc.

But when it comes to the more complex data, like HTML fields or Image fields, I like to create my own from scratch. Looking at the full source of the extension methods I have presented here, you'll see how a lot of properties are set on the field objects themselves. The values I've researched and selected eliminate a lot of weirdness that seems to be caused by programmatically using Publishing columns, such as proper display of the HTML toolbars in various backstage editing screens or non-intuitive default values that need to be set to make the field work at all.
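To give a flavor of what one of these publishing wrappers looks like, here's a sketch of CreateHTMLSiteColumn following the same AddFieldAsXml pattern as CreateSiteColumn above; the specific property values here are my assumptions, not necessarily the exact ones the full listing uses:

public static HtmlField CreateHTMLSiteColumn(this SPWeb web, Guid id, string displayName, string internalName, string groupName, bool required)
{
    //default the internal name if the caller didn't supply one
    if (string.IsNullOrEmpty(internalName))
        internalName = displayName.GetInternalName(groupName);
    //build the prenatal field and set publishing-friendly properties
    HtmlField field = new HtmlField(web.Fields, "HTML", displayName)
    {
        Group = groupName,
        Required = required,
        RichText = true,
        RichTextMode = SPRichTextMode.FullHtml
    };
    //force our id and internal name, then persist
    web.Fields.AddFieldAsXml(field.GetSchemaForFieldWithIdAndInteralName(id, internalName));
    web.Update();
    //return the freshly-created field
    return web.Fields[id] as HtmlField;
}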

However, assuming all of their settings work for our needs, I think it's a good idea to reuse them to make our site as uncluttered as possible. If you encounter any naming collisions along the way, especially if you accidentally try to provision something with one of the built-in Ids, you'll be greeted with the following error:

Getting a naming collision error

We'll discuss our category column later when we get to lookups. Whenever your data model has parent-child relationships, you need to craft your lists carefully. If the lookup values are simple static strings, you can get away with a choice column. But if you need true referential integrity, then we'll be using lookup columns to associate list items from one list into another.

Although SharePoint lists have started getting closer, conceptually at least, to SQL tables (2010 added support for things like basic joins and indexes), that doesn't make them an RDBMS! The power of SQL Server might have spoiled us, with the ability to create two-way foreign keys on-the-fly. But with SharePoint, we need to be more explicit. This has some interesting effects on the order in which we provision the components of our site, and we'll get into that in a bit.

Provision the Content Types

Now that we have our site columns created, we can bundle them together into the content types that will define the shape of our data. Just like before, we'll do our constants first and then write the deployment code. As with our site columns above, we'll be hard coding more guids, as well as using the built-in SPContentTypeIds and other constants from the SharePoint API.

I wish we could do this with lists and webs alike, but other than our method of creating SPFields with given ids and internal names, only the SPContentType constructor formally supports this, allowing a means for us to tell our object what its unique id will be (by using a SPContentTypeId object built from welding the parent content type's id to the constant guid passed in). This is especially useful for content types, since it makes SPMetal-generated code portable across environments (both for isolated development sites and for staging/production servers).

Code Listing 12: Constants.cs

  1. public static class ContentTypes
  2. {
  3. public const string GroupName = "DDD Content Types";
  4. public class RollupArticle
  5. {
  6. public static string Name = "Rollup Article";
  7. public static Guid Id = new Guid("B7851C16-669A-4296-880F-04D10E833F11");
  8. }
  9. public class RollupCategory
  10. {
  11. public static string Name = "Rollup Category";
  12. public static Guid Id = new Guid("D1E14D14-8083-4F50-81E6-08B4091234C7");
  13. }
  14. }

And the Structure code that consumes it:

Code Listing 13: Structure.EventReceiver.cs

  1. //create content types
  2. SPContentType rollupArticle = root.CreateContentType(
  3. Constants.ContentTypes.RollupArticle.Id,
  4. Constants.ContentTypes.RollupArticle.Name,
  5. Constants.ContentTypes.GroupName,
  6. ContentTypeId.Page,
  7. Constants.SiteColumns.MainContent.Id,
  8. Constants.SiteColumns.Abstract.Id,
  9. Constants.SiteColumns.ThumbnailImage.Id,
  10. Constants.SiteColumns.ExternalLink.Id,
  11. Constants.SiteColumns.RollupDate.Id);
  12. SPContentType categoryLookup = root.CreateContentType(
  13. Constants.ContentTypes.RollupCategory.Id,
  14. Constants.ContentTypes.RollupCategory.Name,
  15. Constants.ContentTypes.GroupName,
  16. SPBuiltInContentTypeId.Item,
  17. SPBuiltInFieldId.Description);

CreateContentType creates a content type on the extended SPWeb. The parameters are as follows: the guid of the content type, the display name, the group name, the SPContentTypeId of the base content type, and finally a params listing of its site column ids (in order). This method could easily be overloaded to take in object references for the base content type and the site columns (since those will generally be in scope for our provisioning code). But I'm presenting the version that only deals with ids to emphasize the usage of the constants class.
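Here's a sketch of what CreateContentType roughly looks like; the id-welding line uses the standard "parent id + 00 + guid" convention, and the details may differ slightly from the full implementation:

public static SPContentType CreateContentType(this SPWeb web, Guid id, string name, string groupName, SPContentTypeId parentId, params Guid[] fieldIds)
{
    //weld our guid onto the parent's id to get a deterministic child id
    SPContentTypeId ctId = new SPContentTypeId(string.Concat(parentId.ToString(), "00", id.ToString("N").ToUpper()));
    //create the content type under its parent and file it in our group
    SPContentType ct = new SPContentType(ctId, web.ContentTypes, name);
    ct.Group = groupName;
    ct = web.ContentTypes.Add(ct);
    //link the site columns in the order they were passed in
    foreach (Guid fieldId in fieldIds)
        ct.FieldLinks.Add(new SPFieldLink(web.Fields[fieldId]));
    //save
    ct.Update();
    //return
    return ct;
}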

Notice that the provisioning methods we've looked at so far actually return the "SP[whatever]" object that it was responsible for creating. This allows us to set additional properties on them (such as the display format for a date column) without cluttering up the signatures of the methods any more than they intrinsically are.

This also allows us to reference their Ids. So if you prefer, the first call in the code listing above could be rewritten as follows:

Code Listing 14: Structure.EventReceiver.cs (Alternate)

  1. SPContentType rollupArticle = root.CreateContentType(
  2. Constants.ContentTypes.RollupArticle.Id,
  3. Constants.ContentTypes.RollupArticle.Name,
  4. Constants.ContentTypes.GroupName,
  5. ContentTypeId.Page,
  6. mainContent.Id,
  7. summary.Id,
  8. thumbnail.Id,
  9. externalLink.Id,
  10. rollupDate.Id);

This is just a point of style. Like I said, I wanted to demonstrate how the hard-coded ids come in handy in different scenarios. However, using the Id property of an SPField object (or whatever) will certainly be useful if, for example, you're dealing with a dynamic situation when you weren't able to predetermine the field's guid and needed the actual object to get it. Either way, I can't express it enough: get the guids and save the headaches.

Provision the Web Hierarchy

Next we need to build the physical structure of the portal. (Although this is a crucial part of any SharePoint information architecture, note that it is only used here for demonstration purposes; there's no reason our web part example would require separate webs for these particular infrastructural components. But since the Publishing Infrastructure requires a single Pages library per web, I generally create sub webs for each content type that derives from a page.)

Starting with the root web, we programmatically create child and grandchild sub webs, keeping each resultant SPWeb object in scope. Here's another example of the benefits of having provisioning logic return their asset: we have a reference to each site, against which we can execute extension methods to provision more structure specific to that particular location in the site hierarchy. And by keeping references to objects like SPWeb and SPList in scope, we can reference their guids if needed, since we cannot provide them at provision time (and hence keep them in our constants).

Code Listing 15: Constants.cs

  1. public static class Webs
  2. {
  3. public const string Rollup = "Rollup";
  4. public const string Category = "Category";
  5. }

Note above that we're not sub classing each web like we did with site columns and content types. Like the "Features" constants class, when we only care about a single property (in that case, the guid), we can just list out the names of the webs here. I don't want to go too overkill with the namespaces here; classes with a single member are sloppy. However, like I keep mentioning, DDD is a playground for the OCD programmer inside you. Organize your code as tightly or loosely as you'd like!

Code Listing 16: Structure.EventReceiver.cs

  1. //create site hierarchy
  2. SPWeb categories = root.CreatePublishingSubSite(Constants.Webs.Category);
  3. SPWeb articles = categories.CreatePublishingSubSite(Constants.Webs.Rollup);

CreatePublishingSubSite takes in the display name for a sub web to be created with the blank template and have the publishing feature activated. The description is inferred generically from the name. You might need to add an overload that takes in the URL as well, if the relative URL for the sub site can't be generated cleanly (by way of some HttpUtility.UrlEncode loving and replacing spaces with empty strings) from the display name.
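Here's a minimal sketch of what CreatePublishingSubSite might look like, assuming the blank site template ("STS#1") and a URL derived by stripping spaces from the display name; the overload that takes an explicit URL would simply skip that first line:

public static SPWeb CreatePublishingSubSite(this SPWeb parent, string name)
{
    //derive a clean relative url from the display name
    string url = name.Replace(" ", string.Empty);
    //create the web off of the blank template, inferring a generic description from the name
    SPWeb web = parent.Webs.Add(url, name, string.Format("The {0} web.", name), parent.Language, "STS#1", false, false);
    //activate just the publishing web feature instead of using the publishing template
    web.EnsureFeature(FeatureIds.OfficePublishingWeb);
    //return
    return web;
}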

Why not use the publishing template outright for the web? Because using the blank template and activating only the publishing feature dumps less crap into your new site. Win. However, regardless of how it's provisioned, whenever a new publishing site is created, the pages library still has all kinds of garbage in it: content types, columns, and default pages that you'll probably not be using. So the next thing I do is take out the trash:

Code Listing 17: Structure.EventReceiver.cs

  1. //clean page libraries and associate with content types
  2. categories.NeuterPagesLibrary(categoryLookup);
  3. articles.NeuterPagesLibrary(rollupArticle);

NeuterPagesLibrary, after clearing everything mentioned above out, associates the pages library of the extended SPWeb with the provided params listing of content type objects in the order they are specified as parameters to the method. This is huge, as it controls the order in which they appear when you go to provision new pages from the ribbon. Going the extra mile here will make your users' editing experience much easier.

The full implementation of this method is also a great example of just how deep into the minutia of SharePoint customization All Code can get you.

Code Listing 18: Utilities.cs

  1. public static SPList NeuterPagesLibrary(this SPWeb web, params SPContentType[] contentTypes)
  2. {
  3. //make sure this is a publishing web
  4. if (!web.IsFeatureActivated(FeatureIds.OfficePublishingWeb))
  5. return null;
  6. //set home page to default page (so we can delete /pages/default.aspx)
  7. PublishingWeb pub = PublishingWeb.GetPublishingWeb(web);
  8. pub.DefaultPage = web.GetFile("default.aspx");
  9. //make sure we can use custom layouts
  10. pub.AllowAllPageLayouts(true);
  11. pub.Update();
  12. //get pages list
  13. SPList list = web.GetPagesList();
  14. //delete all pages
  15. for (int n = list.Items.Count - 1; n >= 0; n--)
  16. list.Items.DeleteItemById(list.Items[n].ID);
  17. //misc settings
  18. list.ContentTypesEnabled = true;
  19. list.EnableFolderCreation = true;
  20. //add our content types if they aren't already on the list
  21. List<SPContentType> existingCTs = list.ContentTypes.Cast<SPContentType>().ToList();
  22. contentTypes.ToList().ForEach(ct =>
  23. {
  24. SPContentType existingCT = existingCTs.FirstOrDefault(x => x.Id.Equals(ct.Id) || x.Id.Parent.Equals(ct.Id));
  25. if (existingCT == null)
  26. list.ContentTypes.Add(ct);
  27. });
  28. //clear out of the page content types
  29. list.SoftDeleteContentType(ContentTypeId.Page);
  30. list.SoftDeleteContentType(ContentTypeId.ArticlePage);
  31. list.SoftDeleteContentType(ContentTypeId.WelcomePage);
  32. //update list (and reload it from the database)
  33. list.Update();
  34. list = web.GetPagesList();
  35. //update content type order
  36. List<SPContentType> cts = new List<SPContentType>();
  37. foreach (SPContentType ctRoot in list.RootFolder.ContentTypeOrder)
  38. {
  39. //add our content types first
  40. if (contentTypes.Select(ct => ct.Name).Contains(ctRoot.Name))
  41. cts.Insert(0, ctRoot);
  42. else
  43. cts.Add(ctRoot);
  44. }
  45. //determine if we have any content types
  46. if (cts.Count > 0)
  47. {
  48. //set order
  49. list.RootFolder.UniqueContentTypeOrder = cts;
  50. list.RootFolder.Update();
  51. }
  52. //soft delete extra columns
  53. list.SoftDeleteField(FieldId.ByLine);
  54. list.SoftDeleteField(FieldId.ArticleDate);
  55. list.SoftDeleteField(FieldId.SummaryLinks);
  56. list.SoftDeleteField(FieldId.SummaryLinks2);
  57. list.SoftDeleteField(FieldId.PublishingPageImage);
  58. list.SoftDeleteField(FieldId.PublishingImageCaption);
  59. list.SoftDeleteField(FieldId.PublishingPageContent);
  60. //save
  61. list.Update();
  62. return list;
  63. }

There's a lot to say about this method, including the other extension methods it contains. The details of these, however, are outside the scope of this book. What I really wanted to demonstrate here is how much you can do with just over 60 lines of code; it would take tomes of XML to accomplish this same depth of detail in a site template.

Provision the Lookup List

Lookup lists (and lookup columns) are a bit more complicated than other lists and columns, and their nature affects the order of operations of site provisioning. To demonstrate this, I wanted to add these into our example web part. For each article that we'll be rolling up, there will be a category, which we'll store as a lookup.

Therefore, we'll next need to create a list to store the categories. It should be noted that we could use a choice column, but that's not very exciting: even though the title will serve as the lookup value either way, we might want to store additional metadata for each option, like a description. That's another reason to use lookups over choices: it allows us to model our data more referentially.

To implement this, we create a generic list, associate a content type that describes its schema, and then create a site lookup column for it. First, the Constants:

Code Listing 19: Constants.cs

  1. public static class Lists
  2. {
  3. public const string CategoryLookup = "Category Lookup";
  4. }

And then the Structure:

Code Listing 20: Structure.EventReceiver.cs

  1. //create list
  2. SPList categoryList = root.CreateGenericList(Constants.Lists.CategoryLookup);
  3. categoryList.AssociateWithContentTypes(categoryLookup);
  4. //create lookup site column
  5. SPFieldLookup categoryLookupField = root.CreateLookupSiteColumn(
  6. Constants.SiteColumns.CategoryLookup.Id,
  7. Constants.SiteColumns.CategoryLookup.DisplayName,
  8. Constants.SiteColumns.CategoryLookup.InternalName,
  9. Constants.SiteColumns.GroupName,
  10. categoryList,
  11. false,
  12. false);

CreateGenericList is a simple method that creates a list with the generic template in the extended SPWeb. Next, AssociateWithContentTypes, similar to NeuterPagesLibrary, configures a list to have only the passed-in params listing of content types, in order. I then use CreateLookupSiteColumn to create a site column that is configured to act as a one-way foreign key from whichever content type is using the column into the passed-in list. The two final Boolean parameters control if the lookup column is required, and if it allows multiple values.
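For completeness, here are minimal sketches of the two simpler helpers; the real AssociateWithContentTypes also strips out the default content types and enforces ordering (much like NeuterPagesLibrary does), and CreateLookupSiteColumn follows the same forced-id AddFieldAsXml pattern as the other site column methods, so neither sketch is the full implementation:

public static SPList CreateGenericList(this SPWeb web, string name)
{
    //create a custom (generic) list and hand back a reference to it
    Guid id = web.Lists.Add(name, string.Empty, SPListTemplateType.GenericList);
    return web.Lists[id];
}
public static void AssociateWithContentTypes(this SPList list, params SPContentType[] contentTypes)
{
    //turn on content type management and add each passed-in type if it's not already there
    list.ContentTypesEnabled = true;
    foreach (SPContentType ct in contentTypes)
        if (list.ContentTypes[ct.Name] == null)
            list.ContentTypes.Add(ct);
    list.Update();
}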

Finally, we now need to tack this site column onto our content type. Of course, we could have done things in a different order, but I wanted to demonstrate one of the coolest features of content type inheritance: updates to a parent content type can be pushed down to their children. Not only does this make upgrade scenarios much easier to plan, but it also helps out in some interesting order of operation paradoxes you can run into with provisioning code.

[Note: if you ever find yourself writing code to dynamically access content types via SPContentTypeId, and you quickly find that code not working, here's a tip: content type inheritance isn't simply happening at design time when you create a content type based off of another. Each time a list gets associated with a content type, it actually gets its own copy of it, loosely linked to its parent. You can tell because its id is one "guid longer" than its parent's. This loose linking allows it to receive updates from its parent, but also allows you to extemporaneously add columns to the list it's associated with.

This is also why SPMetal (which we'll discuss later) creates some weird names for your objects. If you have a content type named "Beer" and a list named "Beers" that is associated with it, you'll actually have two different SPMetal-generated content types: a "Beer" and a "BeersBeer." You can use the "Beer" content type for polymorphic operations, but all of the items in the list will actually be of type "BeersBeer" because the list is creating its own version of the content type for its unique use.]

Code Listing 21: Structure.EventReceiver.cs

  1. //add lookup to content type
  2. rollupArticle.AddColumns(categoryLookupField);

AddColumns extends SPContentType and allows us to add a params collection of site columns to it. When saved, all child content types will be updated with the changes. This method calls upon another, GetFieldLinkIds, which gives us an array of guids that represent the existing columns so that we only add unique ones to the content type.

Like I said, having this method allows us to go slightly out of order when provisioning our site's structure. This is especially useful when you are as anal as I am and stay up half the night refactoring your code. I sometimes break up my more complicated structure feature receiver events into methods like CreateSiteColumns, CreateContentTypes, etc. But this leads to a paradox in weird edge cases, for example, if you need one pages library to lookup another.

So I like logic that reads as follows:

  1. Create site columns
  2. Create content types
  3. Create lists
  4. Create lookups
  5. Associate lookups with content types
  6. Create sub webs

Which is much easier to follow than an order of operations that tries too hard to build all objects of a particular type at once:

  1. Create some site columns (We can't finish the columns until we have the look ups, but those need list references.)
  2. Create some lists (We can't finish the lists until we have all the content types.)
  3. Create lookups
  4. Create the rest of the site columns (We now have the look ups.)
  5. Create content types (We now have all the site columns.)
  6. Create the rest of the lists (We now have the content types.)
  7. Create sub webs

Finally, the above "association" step gets us around a concurrency error with calling SPPersistedObject.Update() on many objects in the same scope. A lot of these extension methods end up calling SPWeb.Update() to persist everything to the database, and then returns a fresh copy of the object to make sure that we avoid these errors. And since this is provisioning code, we can afford the extra database hits. Let's look at the two new utility methods discussed here for a good example of this:

Code Listing 22: Utilities.cs

  1. public static SPContentType AddColumns(this SPContentType ct, params SPField[] fields)
  2. {
  3. //get existing fields
  4. Guid[] existing = ct.GetFieldLinkIds();
  5. //add new fields
  6. foreach (SPField field in fields)
  7. if (!existing.Contains(field.Id))
  8. ct.FieldLinks.Add(new SPFieldLink(field));
  9. //save
  10. ct.Update(true);
  11. //return
  12. return ct.ParentWeb.TryGetStandardContentType(ct.Id);
  13. }
  14. public static Guid[] GetFieldLinkIds(this SPContentType ct)
  15. {
  16. //return
  17. return ct.FieldLinks.Cast<SPFieldLink>().Select(i => i.Id).ToArray();
  18. }

Notice in Line #12, we return the content type from the database, instead of the "ct" variable. Another issue I've run into (and have had trouble reproducing precisely) is code that programmatically manipulates content types in a Pages library throwing an InvalidOperationException. It seems to get ornery when we treat the Pages library of a Publishing web too much like a normal list (even though it really is a normal list).

So the bottom line is to keep in mind that we are performing an awful lot of database operations in a single method scope with multiple .Update() calls; order of operations should be strictly based on intuition (i.e., site columns need to come before content types) unless concurrency errors require us to meander our logical path.
