
8/28/2011


Calling Configuration-less Silverlight-enabled WCF Services From SharePoint 2010 Timer Jobs

<Back Story>

Something that I've had to deal with over and over again is timer jobs that grow and grow in scope. Once you've gone through the diligence of designing, writing, debugging, deploying, configuring, and testing one, it's way too tempting to simply piggyback new functionality onto it rather than go through all of that again.

Of course, if there are two discernible tasks, a job will be written for each without hesitation, pursuant to basic architectural separation of concerns. What I'm talking about is duct taping functionality onto an existing timer job; functionality that, if there were no existing timer job upon which to parasitically live, probably wouldn't be implemented at all.

An example of this is a profile synchronization job I had to build. Basically, we had to dump user data from a SharePoint 2007 environment (which we previously built) into a SQL database to drive a massive SharePoint 2010 / RIA Services / Silverlight app. No sweat. But then the project stakeholders started chitchatting about making this a two-way sync, with data "owned" by the 2010 app needing to be stored back in 2007 (which was the intranet and therefore purveyor of all things profile).

So I scoped out what it would take to build a second job that would query our database for the information that that 2010 app owned, shoot it up to our custom profile service in the 2007 farm, and update the SharePoint profile for each user. Oh yeah, we'd need a feature to create the additional profile properties. Oh yeah, we'd have to crack open our 2007 code and update our business layer to support the new properties. Oh yeah, we'd have to get all this working in dev first.

It was a bit more effort than the client was willing to sign off on as an enhancement. Then they asked the question I was hoping to avoid: "What if we just added this to the existing profile job?" Oh boy. To make a long story short, we were commissioned to commit this architectural sin in order to bring the price tag down into the feasible range.

So I slapped on my tool belt and got to work remodeling my timer job.

</Back Story>

The main challenge I've found with timer jobs is having to manipulate SharePoint acontextually. Whenever SPContext.Current is null, you need to be a little more careful and creative. However, the fact that the timer job infrastructure provides an SPWebApplication object to work with usually does the trick. A lot of the time, the first SPSite in the web app's Sites collection is what I need to work with.
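
For context, here's roughly what that looks like inside a timer job (a minimal sketch; the job class name and the work inside the using block are placeholders):

  using System;
  using Microsoft.SharePoint;
  using Microsoft.SharePoint.Administration;

  public class ProfileSyncJob : SPJobDefinition
  {
    public ProfileSyncJob() : base() { }

    public ProfileSyncJob(string name, SPWebApplication app)
      : base(name, app, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
      //no SPContext.Current in here; lean on the web application the job is attached to
      SPWebApplication app = this.WebApplication;
      //assume the first site collection is the one we care about
      using (SPSite site = app.Sites[0])
      {
        SPWeb web = site.RootWeb; //disposed along with the site
        //acontextual work against site / web goes here
      }
    }
  }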

But that still leaves me in a place where I'm hard coding a URL or assumedly using the first site's RootWeb object. So of course that stuff goes into an SPPropertyBag or the web.config file. The former is a ton easier to work with, but the 2007 timer job that was already implemented used the web.config file. Additionally, the job had to consume the aforementioned profile service, which was written in WCF. Getting that to work in 2007 was quite difficult, but necessary due to the fact that that environment had a lot of Silverlight; conventional .NET 2.0 ASMX web services wouldn't work.
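
As an aside, the SPPropertyBag route is only a couple of lines when you can use it; something like this, assuming you already have an SPSite in hand (the key name and URL are made up for illustration):

  //write a setting into the root web's property bag (e.g. from a feature receiver)
  SPPropertyBag properties = site.RootWeb.Properties;
  properties["ProfileServiceUrl"] = "http://intranet/_vti_bin/ProfileService.svc";
  properties.Update();
  //read it back later (e.g. from the timer job)
  string serviceUrl = site.RootWeb.Properties["ProfileServiceUrl"];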

In order to make consuming WCF services in a SharePoint 2007 environment as non-suicidal as possible, I implemented all of the configuration in code. This way, not only did I not have to screw around with anything in the _vti_bin directory, but I could also wrap up all of the configuration code in a service factory and share it across my application.

So I'd like to discuss the combination of these two topics: consuming web.config files from timer jobs, and using that to programmatically spin up WCF service proxies. First, we need to get access to the web.config file. SharePoint timer jobs run under the context of the account running the SharePoint 2010 Timer Windows service (OWSTIMER.exe). Since SharePoint security hurts my head and soul, I see no reason why this can't be a farm-level account, or, at the very least, some account that has read access to the web.config file of the web application that the feature which installed the timer job was scoped to. That was a long sentence; sorry.

I've tried several different ways to open the web.config file programmatically, with varying success. The correct way (or at least the way that feels right to me) is to use the WebConfigurationManager class. Although I've gotten it to work, it seems a bit fickle: it would be fine locally, for example, but after hours of configuration and security checks I couldn't get it to work on the server. And at that point, my clients care more about billable hours than I do about elegant solutioning; I abandoned that approach for something more straightforward and pervasive.
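
For the record, the abandoned approach looked roughly like this (a sketch only; "app" is the timer job's SPWebApplication, and as I recall SPIisSettings.ServerComment holds the IIS web site name):

  //open the web application's web.config through ASP.NET's configuration API
  //(null checks omitted for brevity)
  SPIisSettings iis = app.GetIisSettingsWithFallback(SPUrlZone.Intranet);
  System.Configuration.Configuration config =
    System.Web.Configuration.WebConfigurationManager.OpenWebConfiguration("/", iis.ServerComment);
  string serviceUrl = config.AppSettings.Settings["ServiceURL"].Value;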

The SPWebApplication class has a weirdly-named method called GetIisSettingsWithFallback that returns an awkwardly-named SPIisSettings object chock-full of metadata about the web site and app pool running the application. I use the Path property to get the physical path of the web application's root folder, and tack "web.config" onto it as follows:

Code Listing 1

  1. string webDotConfigPath = Path.Combine(app.GetIisSettingsWithFallback(SPUrlZone.Intranet).Path.FullName, "web.config");

In Line #1, "app" is the SPWebApplication object hanging off the timer job. Now that I have the path, I open up the file and shove its contents into an XmlDocument:

Code Listing 2

  1. XmlDocument doc = new XmlDocument();
  2. doc.Load(webDotConfigPath);

Finally, I load the appSettings section into a dictionary:

Code Listing 3

  1. //get app settings
  2. XmlNodeList nodes = doc.SelectNodes("/configuration/appSettings");
  3. if (nodes.Count == 0)
  4. throw new Exception(string.Format("No application settings were found in {0}.", webDotConfigPath));
  5. //get settings children
  6. Dictionary<string, string> settings = new Dictionary<string, string>();
  7. foreach (XmlNode node in nodes[0].ChildNodes)
  8. if (node.Attributes != null)
  9. settings.Add(node.Attributes["key"].Value, node.Attributes["value"].Value);

I know this is a rather gritty way of sucking the app settings out of a web.config file, but wrapping the above logic in an SPSecurity.RunWithElevatedPrivileges delegate works every time.
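
Rolled together, the wrapper ends up looking something like this (a sketch assembled from the listings above; I'm assuming it lives in the same Utilities class that the proxy factory below calls into):

  //needs: System.Collections.Generic, System.IO, System.Xml, Microsoft.SharePoint, Microsoft.SharePoint.Administration
  public static Dictionary<string, string> GetConfigurationSettings(SPWebApplication app)
  {
    Dictionary<string, string> settings = new Dictionary<string, string>();
    SPSecurity.RunWithElevatedPrivileges(() =>
    {
      //Code Listing 1: build the path to web.config
      string webDotConfigPath = Path.Combine(app.GetIisSettingsWithFallback(SPUrlZone.Intranet).Path.FullName, "web.config");
      //Code Listing 2: load it
      XmlDocument doc = new XmlDocument();
      doc.Load(webDotConfigPath);
      //Code Listing 3: pull the appSettings into the dictionary
      XmlNodeList nodes = doc.SelectNodes("/configuration/appSettings");
      if (nodes.Count == 0)
        throw new Exception(string.Format("No application settings were found in {0}.", webDotConfigPath));
      foreach (XmlNode node in nodes[0].ChildNodes)
        if (node.Attributes != null)
          settings.Add(node.Attributes["key"].Value, node.Attributes["value"].Value);
    });
    return settings;
  }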

Moving into the second topic, I need four of these settings to invoke a SharePoint WCF service running in SharePoint 2007's web service context: the URL of the service, and the username, password, and domain of an account to impersonate. Using these pieces of data, I can construct my service proxy. For the purposes of this scenario, we used straight Windows authentication.

<Off Topic>

However, when calling these services from Silverlight on the UI, it had to work anonymously on the client, but run in the context of the current user to perform updates (adding items to lists, etc.) on the server. To make this Silverlight communication even possible, I had to inject a clientaccesspolicy.xml file into the Files collection on the root SPWeb, since merely dropping it in the IIS virtual directory didn't seem to work.
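
The injection itself boiled down to a few lines like these (the wide-open policy and the site URL are just placeholders; lock yours down appropriately):

  //push clientaccesspolicy.xml into the root web's Files collection
  string policy = @"<?xml version=""1.0"" encoding=""utf-8""?>
  <access-policy>
    <cross-domain-access>
      <policy>
        <allow-from http-request-headers=""*""><domain uri=""*"" /></allow-from>
        <grant-to><resource path=""/"" include-subpaths=""true"" /></grant-to>
      </policy>
    </cross-domain-access>
  </access-policy>";
  using (SPSite site = new SPSite("http://intranet"))
  {
    site.RootWeb.Files.Add("clientaccesspolicy.xml", System.Text.Encoding.UTF8.GetBytes(policy), true);
  }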

Additionally, I had to be crafty with my SPSite objects and elevated contexts within the service methods to get updates to work. Perhaps in today's SharePoint 2010 (which has made the move from ASMX to WCF for its service layer) / Silverlight 4 timeframe things work better, but this was 2007 with SL3; conventional approaches weren't panning out. Here's an example of what the logic looked like inside a WCF service:

Code Listing 4

  1. SPSite site = null;
  2. Guid siteId = SPContext.Current.Site.Id;
  3. SPUser user = SPContext.Current.Web.CurrentUser;
  4. SPSecurity.RunWithElevatedPrivileges(() => { site = new SPSite(siteId); });
  5. using (site)
  6. {
  7. //something that calls .update
  8. }

In Line #4, we had to elevate to instantiate the SPSite object to avoid weird security errors caused by (as far as I can assume) the fact that Silverlight was calling the services anonymously. But the context could only be elevated in that one line; otherwise updates would fail. Also, grabbing the SPUser from SPContext before elevating ensures that it always represents the current user.

</Off Topic>

In order to programmatically configure a WCF service proxy, we need to craft a Binding and an EndpointAddress, as well as manipulate the proxy settings themselves. The first step is the binding. Since WCF and I have a rocky relationship, I always start by configuring my proxies very optimistically: long timeouts, large "max" values (object graphs, message sizes, array lengths, etc.). Since it takes a lot of headache to debug issues in your service (beyond the generic "Not Found" error), I like to start optimistically and then tweak my settings back into the acceptable range.

For example, a common situation is when service calls that query for data randomly error out. If one particular call happens to bring back a lot of rows relative to the others, that particular response might exceed the (in my opinion, rather small) default message size. So I change it to int.MaxValue. Then, when the application's done, I'll tune that back down, so that it only errors out when a query really is large enough to affect performance.

So here's my optimistic factory method for building a happy binding (this code is for a BasicHttpBinding using Windows auth):

Code Listing 5

  1. private static BasicHttpBinding GetBinding()
  2. {
  3. //initialization
  4. BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.TransportCredentialOnly);
  5. //configure binding
  6. binding.MaxBufferSize = int.MaxValue;
  7. binding.MaxBufferPoolSize = long.MaxValue;
  8. binding.ReaderQuotas.MaxDepth = int.MaxValue;
  9. binding.MaxReceivedMessageSize = int.MaxValue;
  10. binding.OpenTimeout = TimeSpan.FromMinutes(10);
  11. binding.SendTimeout = TimeSpan.FromMinutes(10);
  12. binding.CloseTimeout = TimeSpan.FromMinutes(10);
  13. binding.ReceiveTimeout = TimeSpan.FromMinutes(10);
  14. binding.ReaderQuotas.MaxArrayLength = int.MaxValue;
  15. binding.ReaderQuotas.MaxBytesPerRead = int.MaxValue;
  16. binding.ReaderQuotas.MaxNameTableCharCount = int.MaxValue;
  17. binding.ReaderQuotas.MaxStringContentLength = int.MaxValue;
  18. binding.Security.Transport.ProxyCredentialType = HttpProxyCredentialType.Ntlm;
  19. binding.Security.Transport.ClientCredentialType = HttpClientCredentialType.Ntlm;
  20. binding.Security.Message.ClientCredentialType = BasicHttpMessageCredentialType.UserName;
  21. //return
  22. return binding;
  23. }

This is pretty straightforward. I basically went through what you'd put in the web.config file, and found the equivalent .NET representation for each setting.

Now here's the code that takes this binding, and builds the proxy from it:

Code Listing 6

  1. public static ServiceSoapClient GetProxy(SPWebApplication app)
  2. {
  3. //get settings
  4. Dictionary<string, string> settings = Utilities.GetConfigurationSettings(app);
  5. if (settings == null || !settings.ContainsKey("ServiceURL") || string.IsNullOrEmpty(settings["ServiceURL"]))
  6. throw new Exception("Could not load configuration settings.");
  7. //create proxy
  8. BasicHttpBinding binding = Utilities.GetBinding();
  9. EndpointAddress address = new EndpointAddress(settings["ServiceURL"]);
  10. ServiceSoapClient svc = new ServiceSoapClient(binding, address);
  11. //update object graph items
  12. foreach (OperationDescription op in svc.ChannelFactory.Endpoint.Contract.Operations)
  13. op.Behaviors.Find<DataContractSerializerOperationBehavior>().MaxItemsInObjectGraph = int.MaxValue;
  14. //authentication
  15. svc.ClientCredentials.Windows.AllowNtlm = true;
  16. svc.ClientCredentials.Windows.ClientCredential.Domain = settings["ServiceDomain"];
  17. svc.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
  18. svc.ClientCredentials.Windows.ClientCredential.UserName = settings["UserName"];
  19. svc.ClientCredentials.Windows.ClientCredential.Password = settings["Password"];
  20. //return
  21. return svc;
  22. }

In Lines #4 - 6, we get our dictionary of application settings from the web.config file of the web application against which the feature that installed our timer job was activated, and proactively ensure that we have what we need from it. Lines #8 - 10 physically build the service proxy object. The next two stanzas manipulate settings on the proxy itself, beyond what you can set with the binding.

And now you have a static method that gives you fully configured WCF service proxies without any concern for configuration files. I really like this approach, since it simultaneously reduces the complexity of deployment and increases the portability of your service tier. Whenever you add a new service reference to your UI, instead of a messy copy-and-paste of your client-side WCF configuration XML, you only need to create a new static method to get a strongly-typed reference to the new proxy object. And if something major happens server side (like when this particular client changed from NTLM to Kerberos), you only need to update a few lines of code rather than a bunch of XML in a bunch of files. Win.
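
To put a bow on it, consuming the proxy from the timer job boils down to something like this (UpdateProfile is a stand-in for whatever your service contract actually exposes):

  public override void Execute(Guid targetInstanceId)
  {
    //fully-configured proxy straight from code; no client-side config XML
    ServiceSoapClient svc = Utilities.GetProxy(this.WebApplication);
    try
    {
      //placeholder call; substitute your real operation(s)
      svc.UpdateProfile(@"DOMAIN\someuser", "some value");
      svc.Close();
    }
    catch
    {
      //don't leave a faulted channel hanging around
      svc.Abort();
      throw;
    }
  }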

So that's the story of calling WCF services from SharePoint timer jobs. Whenever you have intense SharePoint logic (and by "intense" I mean logic requiring very high privileges and/or operations that SharePoint won't allow in the context of an HTTP POST), consider wrapping it up as a WCF service (if you're willing to deal with the extra infrastructure). This is especially helpful if you're coding against service applications, where the SharePoint API will slap you for calling it from anywhere near an application page or a web part.

That way, your logic is sitting up on the server in a little black box, and you can consume it from a timer job, web part, feature receiver, client application, or really anywhere else within your portal. Have fun!
