
4/25/2010


A Nifty SharePoint Utility to Get Web, List, and List Item Ids from a URL

I love passing the current SharePoint page's URL around, and using that information to access the underlying SharePoint structure and supporting data objects. The most common scenario is this: you give me a page's URL, and I'll give you the associated SPListItem - without the overhead of using the SharePoint Publishing API. Or give me the URL of an SPFolder that's nested beneath several other layers of folders, and I'll return the containing SPList. In any case, the current page's URL (or any page's URL, for that matter) can give us a great deal of context.

Incidentally, the "U" in "URL" stands for uniform. What this means is that, ideally, whichever resource is pointed to by a URL can always be depended upon to remain at that location. Indeed, a URL can be thought of as an identifier. This paradigm is what gave rise to REST-based technologies. Of course, the page can be deleted, the server can be defenestrated, or the Internet can simply explode, leaving you with broken URLs, among other problems.

However, as far as problems within our control go, especially in SharePoint where URLs are very meaningful, I am comfortable treating URLs as Ids. Internally, will I always use the Guid UniqueId (for an SPListItem) or ID (for an SPSite, SPWeb, or SPList) properties while filtering, slicing, and dicing my way through the SharePoint API? You betcha.

Will I EVER match on titles, names, or other string properties? Well, yeah, sometimes you just have to, but I always wash my hands immediately after doing so.

But as far as URLs go, we have a bit of a contradiction. I just said that they are valid identifiers. Then I said that I don't like matching on strings. What gives? Is a URL (especially when used to construct a Uri object) behaving like an Id or a string? And if it's a string, do we therefore need to worry about other string matching challenges, such as casing, whitespace trimming, URL special character encoding, and so on?

Well, it depends on what you do with it. If you take the current request's URL and start doing substrings and character indexes on it, well, you're going to get yourself into trouble. But you are also asking for it if you iterate entire subsites looking for an SPListItem that matches the given URL - this time in terms of expensive API calls rather than sketchy string manipulation.
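If you do end up comparing URLs as strings, a little defensive normalization goes a long way. Here's a minimal sketch of what I mean (the exact normalization rules are up to you; this just decodes, trims, and ignores casing and trailing slashes):

```csharp
using System;
using System.Web;

public static class UrlNormalization
{
    //compare two SharePoint URLs defensively, rather than with a raw Equals
    public static bool UrlEquals(string first, string second)
    {
        return Normalize(first).Equals(Normalize(second), StringComparison.OrdinalIgnoreCase);
    }

    private static string Normalize(string url)
    {
        //decode %20 and friends, trim whitespace, and drop any trailing slash
        return HttpUtility.UrlDecode(url ?? string.Empty).Trim().TrimEnd('/');
    }
}
```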

At first, I went with the cleanest approach possible: new up a Uri with the URL, and use the properties on that object to discern the current SharePoint site, web, list, and item. However, like I said, SharePoint URLs are very meaningful, in as much as they always follow the same pattern:

http[s]://site/web/[sub web... /...sub web/]list[/folder... /...folder]/page.aspx

Where the Uri idea fails is when we have (as denoted [poorly] above with brackets) nested webs and folders. I haven't dug through Microsoft.SharePoint.Utilities to see if there's something in there that does what I'm about to show you, but obviously the Uri object can't be expected to understand SharePoint site hierarchies.
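To make that limitation concrete: Uri happily splits the path into segments, but it has no way of knowing where the webs end and the list and folders begin. (The URL below is made up for illustration.)

```csharp
using System;

class UriSegmentsDemo
{
    static void Main()
    {
        //Uri gives us the raw path segments, but nothing SharePoint-specific
        Uri url = new Uri("http://site/web/subweb/Documents/folder/page.aspx");
        foreach (string segment in url.Segments)
            Console.WriteLine(segment);
        //prints: /, web/, subweb/, Documents/, folder/, page.aspx
        //is "subweb" a web or a folder? Uri can't tell us; only SharePoint can
    }
}
```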

So I wrote the algorithm myself as an extension method on a string. (Extending Uri might have been cleaner, but then I'm always instantiating these objects with the current page's URL just to turn around and call ToString on them.) You feed it four Guids: the site id, and three out parameters to store the URL's web, list, and item ids. (I didn't implement a version of this to give us folder ids, simply because I never needed them.)

And here we go:

Code Listing 1

  1. public static void GetIdsFromURL(this string itemURL, Guid siteId, out Guid webId, out Guid listId, out Guid itemId)
  2. {
  3. //initialization
  4. SPWeb web = null;
  5. webId = Guid.Empty;
  6. listId = Guid.Empty;
  7. itemId = Guid.Empty;
  8. string originalURL = itemURL;
  9. string listName = string.Empty;
  10. //impersonation
  11. SPSite site = null;
  12. SPSecurity.RunWithElevatedPrivileges(() => { site = new SPSite(siteId); });
  13. //make sure we have an absolute url
  14. if (!itemURL.StartsWith(site.Url))
  15. {
  16. //if it doesn't start with the current site collection's url, make sure to build the absolute url as [site url]/[item url]
  17. if (itemURL.StartsWith("/"))
  18. itemURL = string.Concat(site.Url, itemURL);
  19. else
  20. itemURL = string.Concat(site.Url, "/", itemURL);
  21. }
  22. //keep parsing out url until we get a valid web
  23. while (itemURL.IndexOf("/") != -1)
  24. {
  25. //get this web by url
  26. web = site.AllWebs.Where(w => w.Url.Equals(itemURL, StringComparison.InvariantCultureIgnoreCase)).FirstOrDefault();
  27. if (web != null)
  28. {
  29. //a web was found - done
  30. webId = web.ID;
  31. break;
  32. }
  33. else
  34. {
  35. //a web was not found - iterate: listName is the part that will be sliced off; itemURL is the new web url to try
  36. listName = itemURL.Substring(itemURL.LastIndexOf("/"));
  37. itemURL = itemURL.Substring(0, itemURL.LastIndexOf("/"));
  38. }
  39. }
  40. //get list from web
  41. if (web != null)
  42. {
  43. //make sure list name is not blank - this will be the case for webs
  44. listName = listName.Replace("/", string.Empty);
  45. if (!string.IsNullOrEmpty(listName))
  46. {
  47. //fix "Lists" list name
  48. if (listName.Equals("Lists", StringComparison.InvariantCultureIgnoreCase))
  49. {
  50. //the name of the list will be the first /xxx/ after the web url
  51. listName = originalURL.Replace(string.Format("{0}/Lists/", web.Url), string.Empty);
  52. if (listName.Contains("/"))
  53. listName = listName.Substring(0, listName.IndexOf("/"));
  54. }
  55. }
  56. //get list
  57. SPList list = Utils.GetList(web, originalURL, listName);
  58. if (list != null)
  59. {
  60. //get item
  61. listId = list.ID;
  62. SPQuery query = new SPQuery();
  63. //build query
  64. query.ViewFields = Utils.GetQueryViewFields(list, "Name");
  65. query.Query = Utils.GetNameWhereClause(HttpUtility.UrlDecode(originalURL.Substring(originalURL.LastIndexOf("/") + 1)));
  66. //run query
  67. itemId = Utils.GetItemIdFromFolder(list, list.RootFolder, query);
  68. }
  69. }
  70. }
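Before digging into the details, calling it might look something like this (the page path below is made up for illustration; the site id would come from SPContext or wherever you've stashed it):

```csharp
//grab the context we need up front
Guid siteId = SPContext.Current.Site.ID;
string url = SPContext.Current.Web.Url + "/Pages/default.aspx";
//let the extension method do the parsing
Guid webId, listId, itemId;
url.GetIdsFromURL(siteId, out webId, out listId, out itemId);
if (!itemId.Equals(Guid.Empty))
{
    //we found the item - go get it without iterating the whole site
    using (SPSite site = new SPSite(siteId))
    using (SPWeb web = site.OpenWeb(webId))
    {
        SPListItem item = web.Lists[listId].GetItemByUniqueId(itemId);
    }
}
```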

There are some things to point out:

  • Line 12: The reason I only wrap the instantiation of the SPSite object in the SPSecurity.RunWithElevatedPrivileges delegate (and not the entire method) is to ensure that the impersonation will work when this code is referenced in a SharePoint web service called from a Silverlight client. Since Silverlight calls are anonymous and SharePoint is NTLM, the WCF magic connecting the two can't quite get the context correct when running inside the delegate; any call to SPPersistedObject.Update will fail with a security error. So, when you're in this situation, make sure to grab what you need from SPContext first, then elevate to new up your SPSite. This will ensure your code will run and update in an elevated context, despite the call being ultimately anonymous.
  • Line 26: SPWebCollection does not support Linq, so I found some cool extension methods to make this iteration easier here.
  • Lines 57, 64, and 65: These are simple helper methods I created - nothing fancy.
  • Line 67: This method finishes the job by "smartly" parsing out the URL to get the SPListItem's guid. Here is that method:

Code Listing 2

  1. public static Guid GetItemIdFromFolder(SPList list, SPFolder parent, SPQuery query)
  2. {
  3. //initialization
  4. Guid id = Guid.Empty;
  5. query.Folder = parent;
  6. //run query
  7. SPListItemCollection items = list.GetItems(query);
  8. //query items in this folder
  9. if (items.Count == 1)
  10. id = items[0].UniqueId;
  11. else if (items.Count > 1)
  12. id = Guid.Empty;
  13. else
  14. {
  15. //recurse
  16. foreach (SPFolder folder in parent.SubFolders)
  17. {
  18. //check each result
  19. Guid itemId = Utils.GetItemIdFromFolder(list, folder, query);
  20. if (!itemId.Equals(Guid.Empty))
  21. {
  22. //done
  23. id = itemId;
  24. break;
  25. }
  26. }
  27. }
  28. //return
  29. return id;
  30. }

Why go through all this extra lifting? Simple: anything to avoid for-eaching through list items. Notice Line 5. I pass the same SPQuery object to each recursive depth and update the Folder property to the current folder. This way, my "query" is scoped to only the current depth, and I avoid expensive iterating and kludgy string matching.

So next time you find yourself mindlessly searching your site for a given URL to, for example, pull the metadata for a page, consider using this method. Also, like I said, whenever you start performing string manipulation operations on a URL, something inside you should feel wrong; it should be like the feeling you have on the way to a party when you suddenly break into a cold sweat and wonder if you left your oven on.
