
SharePoint 2013 Web Parts And The App Model, Part 4: The Deployment

You've made it to the end of this series on SharePoint 2013 web parts. Part one gave us an overview of the philosophy behind the new app model development paradigm and how our web parts will live Microsoft.SharePoint.dll-less-ly within it. Part two and part three took us into deep dives of the respective server and client architectures and code. Finally, it's time to talk about deployment.

I've been threatening to write up my super master SharePoint 2013 PowerShell deployment extravaganza and making allusions to it in my last few posts, but it's still not quite ready. Every time the damn thing finally runs clean with no blood in the PowerShell ISE window, something changes or I discover an inefficiency and must fix it...inevitably drawing more blood. It's not unlike re-breaking, over and over, a broken foot that just won't set right in its cast (making me a pretty crappy doctor).

However, my deployment bits for web parts have gotten really solid, and I'm now comfortable sharing them with the world. There are two main parts: the deployment itself, and the data creator that provisions pages and adds web parts to them. The former is the WSP-y stuff we're used to, but I want to dig deeper into the 2013-appropriate-ness of this approach. The latter is implemented entirely with all the new app model, CSOM, and PowerShell stuff that I'm really excited about!

First, the technical deployment of these web parts is, at this point, WSP-driven. I touched on it before, but wanted to start this section by reiterating that my team traveled down this path mainly because it was well-worn: easy and known. With such a large part of the project being written as an app, we needed something that would just right-click-and-work like we were used to from 2010 projects. Learning new technologies is one of the most exhilarating aspects of what I do, but it can be simultaneously overwhelming and frustrating when everything's new, and I'm behind, and I know that there's a legacy way out lurking in the corner.

But here's the thing: if we're not supposed to put custom code in the GAC, and WSPs do exactly that, (I don't do bin deployments) are they 2013 SPKosher? In the last few sections, I've wiggled around this by stating and restating that since we're not writing custom SharePoint code, we're good to go. All of our application logic will live in the MVC site, (of our high trust, server-to-server app) so it's really just web part properties and ASCX files that AJAX-ily call services (or our MVC Web API) that will be written to the DLL and GACed.

Although I haven't heard any guidance from Microsoft on this next point, I'm becoming more and more...what's the word...prepared for farm solutions as a whole to be deprecated soon. Given the new paradigm of not having our code running on the front ends, I've started to think about how WSPs affect the farm and could present the same upgrade-precluding ramifications as custom logic. To continue with my medical metaphors, they are sort of like having elective back surgery to strengthen your spine when acupuncture would work just as well.

Recall what I said about timer jobs in the context of the new 2013 development paradigms. The core functionality can be accomplished with the "approved" tools in the app world, but we lose the infrastructure. We could come up with some PowerShell to copy files to various web front ends, and write CSOM to add files to master page and web part galleries. But it's the automated, packaged, schedulable, retractable, scalable platform of WSP deployment that we lose if we indeed need to reinvent this wheel.

And I can't get behind that. But allow me to indulge a bit regardless...

When I compared WSP deployments to elective back surgery, I was trying to hit upon the invasiveness of it. Servers are forced to undergo unpredictable IISRESETs (we can't control the order or exact timing of when a web front end will be bounced). DLLs are forced into the GAC (like I said before, only more curtly, I personally feel like bin deployments require a disproportionate amount of coarsening compared to their benefit). Certain unconfigurable naming conventions are enforced (for example "_layouts/15/[name of project]/[whatever files]").

A lot is going on. Of course, in the vast majority of SharePoint projects, none of these are a big deal at all. Maintenance windows are scheduled, file locations are embraced, conventions are respected, and deployments are executed across the farm with just a bit of PowerShell. But they are invasive. What if you're working on a massive public web site that simply cannot go down? Companies have implemented dual production farms, doubling the investment in their server topology, just to avoid the IISRESETs inherent in WSP deployments.

What I'm saying is that I can foresee farm solutions being given the death penalty for the same crimes that custom SharePoint code has been committing for years. I am not at all predicting this; I just won't be nearly as shocked as I was when I learned about having to quit Microsoft.SharePoint.dll cold turkey. It's invasive like surgery, but the recovery time is very minimal; your farm will be back on its feet in seconds.

Alternatively, acupuncture is a hipper, less invasive way to work the out-of-the-box kinks from your SharePoint site's spine. Instead of going under the knife and slamming our vertebrae with IISRESETs, we can literally pinpoint the different deployment targets of our farm and only swap out those bits, leaving the rest of the environment relatively undisturbed.

With CSOM and REST, we can programmatically provision all of the structure that supports our site. This includes adding assets to galleries, which I feel is the thing we're doing the most in a code driven deployment. With PowerShell, we can spin through the front ends and copy files to them if necessary. I'm not advocating this, since it's way too much infrastructure to rebuild when WSPs have been passed over by the new 2013 development paradigm's angel of death. My only point is that the acupuncture approach could work, and follows the less-invasive manner with which we'll be building SharePoint sites now and in the future.

There are a few other dimensions to this which will keep me using farm solutions until they are wrested away from us for good. The first one is features. How would they work without WSPs? Will there even be features in the future? I don't know, and I don't particularly enjoy burning mental calories contemplating this. But I don't see any way around the crucial deployment role they play. Also, there are these new Design Packages to explore. And what about site collection solutions?

My point is that there's a lot to consider. Given the trend of things in SharePoint 2013 development, it's not insane to be cognizant of the possibility of farm solutions being deprecated. There are alternatives to using them, and it's fun to experiment with them. However, WSPs do so much for us that I cannot fathom their departure without some sort of replacement.

When server code got the axe (remember, we're going through all these paradigm shifts when this object model technically isn't even deprecated yet) we were given a lot of new tools to replace it. Maybe that's what Design Packages will be? Regardless, until we hear more about the future of farm solutions, I will continue to use them in the present.

With that diatribe out of the way, we can get to the web part deployment proper. There are four main components:

  • The WSP (I think we've covered this plenty.)
  • WSP Deployment (I won't go into any detail here because it's nothing new in SharePoint 2013.)
  • Web API Caller
  • Data Creator

The first two we're all set on. Create a new SharePoint 2013 Empty Project in your Visual Studio solution, and add new Visual Web Part items to it. After building and publishing your WSP to your local file system, create your PowerShell deployment scripts that wrap calls to Add-SPSolution and Install-SPSolution. Point these calls to the WSP file(s) published from Visual Studio and you're halfway home.

Adding visual web parts to your solution implicitly creates a site collection-scoped feature that pushes them all to the target site collection's gallery upon activation. Other than prettying up the name and description of the feature, we don't need to give it another thought; the solution does all the heavy lifting for us.

Third we have the Web API Caller, which is a PowerShell script that calls either the Get (to retrieve data) or Post (to create data) method on a Web API controller over in our SharePoint MVC app. In the greater context of the deployment proper, a single master script first installs the app and then calls the WSP deployment script (to get the web parts in the gallery). Next it hits up Web API Caller a few times: once to create the site structure (site columns, content types, lists, etc.); once to provision the taxonomy-driven navigation; again to generate pages based off of this new managed metadata; and one more time for the data creator - of which the web part provisioning portion is only a small part. I promise I'll write up the entire process in a future post!
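To make the receiving end of these calls concrete, here's a rough sketch of what one of these Web API controllers might look like on the MVC side. The controller name, the key check, and the CSOM stub are my illustrative assumptions, not the actual project code:

```csharp
//a hypothetical Web API controller on the MVC app side; the class name,
//key validation, and CSOM stub are assumptions for illustration only
using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;
using Microsoft.SharePoint.Client;

public class StructureController : ApiController
{
    //the same key the PowerShell Web API Caller passes on the query string
    private static readonly Guid WebApiKey = new Guid("965F878B-DECF-4092-907C-F2BA059D4A72");

    public HttpResponseMessage Post(string hostUrl, string webApiKey)
    {
        //reject callers that don't present the shared key
        if (new Guid(webApiKey) != WebApiKey)
            return this.Request.CreateResponse(HttpStatusCode.Unauthorized);
        //open a CSOM context against the target site collection and provision away
        using (ClientContext context = new ClientContext(hostUrl))
        {
            //...create site columns, content types, lists, etc. here...
            context.ExecuteQuery();
        }
        return this.Request.CreateResponse(HttpStatusCode.OK);
    }
}
```

In a high trust app, the ClientContext would of course be built with the appropriate server-to-server plumbing rather than new'd up directly; the point here is just the shape of the endpoint the PowerShell script hits.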

First, let's look at the Web API Caller:

Code Listing 1

  1. #initialization
  2. param(
  3. $timeout = $(72000000),
  4. $url = $(Read-Host -prompt "Web Url"),
  5. $svc = $(Read-Host -prompt "Controller Url"),
  6. $webApiKey = $("965F878B-DECF-4092-907C-F2BA059D4A72"),
  7. $path = $(Split-Path -Parent $MyInvocation.MyCommand.Path)
  8. )
  9. #build request
  10. $serviceUrl = $svc + "?hostUrl=" + $url + "&webApiKey=" + $webApiKey;
  11. $request = [System.Net.HttpWebRequest]::Create($serviceUrl);
  12. $request.Method = "POST";
  13. $request.ContentLength = 0;
  14. $request.Timeout = $timeout;
  15. $request.UseDefaultCredentials = $true;
  16. #call service
  17. $response = $request.GetResponse();

As you can see, this is basically a wrapper around an HttpWebRequest that provides all the configuration needed to craft a call to our Web API. The only two parameters that need values are those on Line #'s 4 and 5: the base URLs to the target SharePoint site collection and MVC app, respectively. Next, on Line #6, we pass in the Web API key, which I discussed earlier. This is a quick and dirty way to do it for the purposes of this post; normally, I actually pass in the path to a DLL and use reflection to pull this key out of a static Guid field in a Constants file.

Here's an excerpt from my master script so you can get an idea of how the Web API Caller is used:

Code Listing 2

  1. powershell .\WebAPICaller.ps1 -url http://sharepoint.local -svc http://mvcapp.local/api/structure
  2. powershell .\WebAPICaller.ps1 -url http://sharepoint.local -svc http://mvcapp.local/api/navigation
  3. powershell .\WebAPICaller.ps1 -url http://sharepoint.local -svc http://mvcapp.local/api/pages
  4. powershell .\WebAPICaller.ps1 -url http://sharepoint.local -svc http://mvcapp.local/api/datacreator

I put "powershell" before each Web API Caller invocation because this runs the script in a separate process that's cleaned up immediately. Since my version reflects into a DLL, I don't want my PowerShell script to keep that assembly hanging out in memory after it's done; a locked assembly will cause Visual Studio builds against this project to fail. I've mentioned this in a few older posts but it's always worth repeating: PowerShell loves to sink its teeth into the assemblies it references, so cycle often whichever shell you're using to make sure you're operating with the latest version of your code.

Other than that, all we have to do is feed in the correct URLs and we've got PowerShell calling the CSOM behind our Web API controllers. The SharePoint 2013 app model paradigm is what led me to conceive this architecture, and I still think it's a bit clunky compared to the much more straightforward server-object-model-powered feature receivers I was so very much in love with in 2010. However, there are a few hidden bonuses that I was delighted to discover that are actually making life easier in 2013!

The first is rapid deployment. Let's say some of our CSOM changes and we need to redeploy our MVC app to our local server. Since there's no code in the GAC, all we have to do is publish the project directly from Visual Studio. Since I don't ever plan on printing my source code (except maybe to frame it one day) I remapped Control+P to "Publish Selection." Since this does a build and a publish, I just hit this chord and press enter on the dialog that pops up (you have to configure a new publishing profile the first time you do it) and I have a new build out in about five seconds.

This leads to the second benefit: no IISRESETs! After publishing our MVC app, we don't have to suffer through a SharePoint 2013 cold load when we want to test a new build of our Web API. This is insanely awesome for those of us who have spent the better part of our careers waiting for web parts to warm up after a redeployment.


If something changes in our web parts (even if it's just tweaking a property in the code or the HTML in the UI) we need to push the WSP. On your local development environment, this is just a right-click-Publish from Visual Studio. It's easy, but we have to then wait for the retraction and deployment as well as the ensuing IISRESET. To combat this, I try to do as much as I can when I create a new web part, and minimize tweaking midstream, opting for larger scale fixes.

But once all the app configuration and SharePoint components are in place, MVC updates - basically our SharePoint site's data access layer - are stupid fast.


The final secret perk is the most obvious: we're not in SharePoint! It is so much nicer living in the MVC world from many different perspectives: debugging, development environment configuration, (it's very hard to fathom the possibility of a VM without SharePoint) local machine specs, (I can build portions of my 2013 apps on my Surface Pro!) the aforementioned benefits of Web API over "SharePoint-enabled" WCF services, and simply not having to deal with the overhead SharePoint requires (Central Administration, Windows services, _layouts\15, and so on; all we need to run our MVC site is IIS).

Finally, let's look at the CSOM in the data creator that adds web parts to pages. This was actually going to be the entirety of this series; somehow I'm just now getting to it at the end of part four. As you approach the edges of CSOM, you get a pretty good idea of what's supported and what's not. Consider the Microsoft.SharePoint.Client.Publishing.PublishingSite class. It has a single method called CreatePageLayout. The fact that, one, this is the entirety of the class, and two, that I can't even find standard MSDN articles to link to, leads me to believe that this isn't just an edge of the CSOM API; it's a jagged cliff overlooking the sea where diving is strictly forbidden.

But I'm of course diving in anyway. My point is that in 2010, if something in the API wasn't intuitive, there always seemed to be a way out in some other method or library. If AvailableContentTypes isn't right, even though it makes sense, just use ContentTypes and move on. But in CSOM, if I want to use some of my favorite goodies from the server OM's PublishingSite class, but find that the client counterpart has but one method, well, that's a pretty sturdy door locked in my face.

The way to blast through this door is to actually quietly sneak around it. That's the only way we're going to be as productive with CSOM as we were with the server OM. Programmatically adding web parts to pages is a perfect example of this paradigm. We do have LimitedWebPartManager in CSOM, and it even presents the AddWebPart method to plop parts onto pages. Easy right?

Wrong. Dead, dead, dead wrong. And it took me a second to realize why. But when I did, man did it sting. If this CSOM is running in my MVC site, how the hell am I supposed to instantiate web part classes from my SharePoint project to be added to pages? Add a reference from a WSP to an MVC site? Nope. Use some crazy reflection? Nope. Give up? Nope! Let's look at the code and I'll show you how I got it to work.

Code Listing 3

  1. context.ClearWebPartsFromPage(context.Web, "/pages/faq.aspx");
  2. context.AddWebPartToPage(context.Web,
  3. "/pages/faq.aspx",
  4. Constants.WebParts.ContentEditor.Name,
  5. Constants.WebParts.ContentEditor.Title,
  6. Constants.PageLayouts.FullWidth.Main,
  7. new Dictionary<string, object>()
  8. {
  9. { "ContentLink", "/CEWP Files/faq.txt" }
  10. });

The above code listing is in the data creator controller's Post method, which if you recall is invoked via a PowerShell script. In my data creator, for every page that has web parts, I first make the call in Line #1 to clear them out via ClearWebPartsFromPage. Then there's a call to AddWebPartToPage (Line #2) for all web parts to be added. Both of these methods are extensions to ClientContext, which is the type of the "context" variable here. ClearWebPartsFromPage simply opens a page via its relative url with respect to the passed in Web object, and uses a LimitedWebPartManager to remove all web parts. This type of maneuver allows our data creator to be idempotent.
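Since its body boils down to just a few CSOM calls, here is a sketch of what ClearWebPartsFromPage might look like. This is my reconstruction, not the post's actual implementation, built from the same helper extensions (GetFileByUrl, CheckOut, CheckInPublishApprove, GetPagesList) the surrounding code uses:

```csharp
//a sketch of the ClearWebPartsFromPage extension; the body is my
//assumption, leaning on the helper extensions described in this post
using System.Linq;
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.WebParts;

public static class ClientContextWebPartExtensions
{
    public static void ClearWebPartsFromPage(this ClientContext context, Web web, string pageUrl)
    {
        //check out the page
        File page = context.GetFileByUrl(web, pageUrl);
        context.CheckOut(page);
        //load all shared web part definitions on the page
        LimitedWebPartManager mgr = page.GetLimitedWebPartManager(PersonalizationScope.Shared);
        context.Load(mgr.WebParts);
        context.ExecuteQuery();
        //delete in reverse so the collection doesn't shift underneath us
        for (int i = mgr.WebParts.Count - 1; i >= 0; i--)
            mgr.WebParts[i].DeleteWebPart();
        context.ExecuteQuery();
        //check the page back in and publish
        context.CheckInPublishApprove(web, page, context.GetPagesList(web));
    }
}
```

The reverse loop is the only subtlety: deleting forward through a collection you're enumerating is a classic way to skip items.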

But the far more interesting method is AddWebPartToPage. Like ClearWebPartsFromPage, it takes in the web and relative url to the page, as well as the name and title of the web part, the zone name, (represented here by constants in Line #'s 4 - 6) a dictionary of name/value pairs representing property defaults, and optional parameters for the zone index and chrome. In this particular example, I'm adding a Content Editor Web Part (CEWP) to the FAQ page. I chose this as an example (versus one of our custom web parts) because setting the content of a CEWP via CSOM isn't straightforward and constitutes another good case for creative client workarounds. Here it is:

Code Listing 4

  1. public static WebPartDefinition AddWebPartToPage(this ClientContext context, Web web, string pageUrl, string fileName, string title, string zoneId, Dictionary<string, object> properties, int zoneIndex = 0, ChromeType chromeType = ChromeType.None)
  2. {
  3. //check out page
  4. File page = context.GetFileByUrl(web, pageUrl);
  5. context.CheckOut(page);
  6. //get manager
  7. WebPartDefinition webPart = null;
  8. LimitedWebPartManager mgr = page.GetLimitedWebPartManager(PersonalizationScope.Shared);
  9. //build request to web part file
  10. string url = string.Format("{0}/_catalogs/wp/{1}", context.Url.TrimEnd('/'), fileName);
  11. HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(url);
  12. request.UseDefaultCredentials = true;
  13. //get response
  14. using (IO.StreamReader sr = new IO.StreamReader(request.GetResponse().GetResponseStream()))
  15. {
  16. //parse xml into web part definition
  17. string xml = sr.ReadToEnd();
  18. WebPartDefinition definition = mgr.ImportWebPart(xml);
  19. //add to page
  20. webPart = mgr.AddWebPart(definition.WebPart, zoneId, zoneIndex);
  21. webPart = context.LoadWebPart(mgr, webPart);
  22. //set standard properties
  23. webPart.WebPart.Title = title;
  24. webPart.WebPart.Properties["ChromeType"] = chromeType;
  25. //set extended properties
  26. if (properties != null)
  27. foreach (string property in properties.Keys)
  28. webPart.WebPart.Properties[property] = properties[property];
  29. //save
  30. webPart.SaveWebPartChanges();
  31. }
  32. //publish
  33. context.CheckInPublishApprove(web, page, context.GetPagesList(web));
  34. return webPart;
  35. }

First, Line #'s 4 and 5 get the page and check it out. GetFileByUrl and CheckOut are more extension methods tacked onto ClientContext; their implementation is out of scope for this post. But once we have a File object, we can instantiate a LimitedWebPartManager (Line #8). The code to add web parts to pages is very different in CSOM than it is on the server.
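For the curious, these two helpers are small enough to guess at. Assuming page URLs relative to the web, they might look something like the following; this is my reconstruction, not the actual project code:

```csharp
//possible implementations of the GetFileByUrl and CheckOut extensions;
//both bodies are assumptions sketched from how Code Listing 4 calls them
using Microsoft.SharePoint.Client;

public static class ClientContextFileExtensions
{
    public static File GetFileByUrl(this ClientContext context, Web web, string pageUrl)
    {
        //resolve the web's server-relative url and load the page's file
        context.Load(web, w => w.ServerRelativeUrl);
        context.ExecuteQuery();
        File file = web.GetFileByServerRelativeUrl(web.ServerRelativeUrl.TrimEnd('/') + pageUrl);
        context.Load(file);
        context.ExecuteQuery();
        return file;
    }

    public static void CheckOut(this ClientContext context, File file)
    {
        //only check the file out if no one has it checked out already
        context.Load(file, f => f.CheckOutType);
        context.ExecuteQuery();
        if (file.CheckOutType == CheckOutType.None)
        {
            file.CheckOut();
            context.ExecuteQuery();
        }
    }
}
```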

Like I said, we can't new up classes that inherit from some web part base, set properties on those objects, and use the web part manager to add them to a page. Instead, we need to call the ImportWebPart method on the manager, pass it some XML, and get back this thing called a WebPartDefinition. This class represents a generic web part, and is actually a pretty nice way to deal with the fact that we can't directly instantiate specific ones. But where do we get the XML from? That's the fun part.

Check out Line #'s 10 - 17. I dynamically build a link to this web part's file in the gallery using the context's url and the web part's file name (which looks like "MSContentEditor.dwp" or "[namespace]_[class name].webpart" without quotes). An HttpWebRequest then pulls down the XML. All we need for this to work is to ensure that our WSP has been deployed so that the file exists in the web part gallery. We use this XML to get a WebPartDefinition on Line #18, and then add it to the page on Line #'s 20 and 21. LoadWebPart is another extension method that simply wraps the CSOM property initialization and ExecuteQuery calls needed to work with this object.
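LoadWebPart is trivial as well; a plausible implementation (again, my assumption rather than the post's actual code) just initializes the definition's web part and property bag so the subsequent property sets have something to work with:

```csharp
//a guess at the LoadWebPart extension: wraps the CSOM Load and
//ExecuteQuery calls needed before touching the web part's properties
using Microsoft.SharePoint.Client;
using Microsoft.SharePoint.Client.WebParts;

public static class ClientContextLoadExtensions
{
    public static WebPartDefinition LoadWebPart(this ClientContext context, LimitedWebPartManager mgr, WebPartDefinition webPart)
    {
        //pull down the web part and its property bag from the server
        //(mgr is passed along for symmetry with the other extensions)
        context.Load(webPart.WebPart);
        context.Load(webPart.WebPart.Properties);
        context.ExecuteQuery();
        return webPart;
    }
}
```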

The rest is easy: just set properties and save; it's a lot like working with a ListItem. Finally, we check the page back in and publish it on Line #33. (CheckInPublishApprove will check the library settings and only perform the necessary operations to make the page visible; if versioning is turned off, for example, it will skip the publishing calls.) AddWebPartToPage returns the WebPartDefinition in case the caller (presumably a data creator) needs to perform further configurations on the web part.

The CEWP here is another example of sneaking around the limitations of CSOM. As far as I could tell, there isn't a property that corresponds to the raw content. Either it's not exposed to us, or the CEWP stores its markup outside of the SharePoint web part persistence infrastructure. However, I did find a property, ContentLink, which stores a link to external content (in this case a file named "faq.txt" in a list entitled "CEWP Files"). So that's what I used! Not only does this allow our deployment to remain 100% automated, but it's actually easier for users to go edit this file in the list rather than having to check out the page, get into the CEWP properties, and republish it. Whenever CSOM locks a door, it's up to us to find a window to crawl through to move forward.

That completes this SharePoint 2013 web part series! Here in part four, we talked about the automated deployment of web parts: specifically the code needed to add them to pages. Back in part one, I introduced the paradigm shifts developers will have to power through when learning SharePoint 2013. Then parts two and three outlined the details of my architecture for 2013 web parts living in the app model. Hopefully these posts provided a solid understanding of how CSOM, our new best friend, will help us code our way through SharePoint 2013! Have fun!
