8170 Views // 0 Comments // Not Rated

The SUESS Lifecycle: Stage 1 - Upload

The SUESS Series


Stage 1 - Upload

Stage 2 - Encode

Stage 3 - SmoothStreaming

Our in-depth media adventure begins, well, at the beginning of the SUESS "lifecycle" with Uploading. This stage contains two sub-components: the Uploader Silverlight control, and the WCF service on the Web Server that facilitates the file upload process. The only prerequisite required here is to have your data tier (Component 3) in place (which can simply be a folder on the Web Server).

Of course, there is nothing new about a file upload control; it's been in HTML longer than I have. Over the years, I've seen a lot of modifications to the familiar read only textbox and "Browse..." button to get the effects people are starting to expect in this whole Web 2.0 craze: progress bars, client-side file type filtering, large file support, etc.

To achieve these nice features, a lot of magic needs to happen client side. If you've read anything I've written before, you'll know that I don't consider AJAX to be magic; it's voodoo at best compared to the wizardry of Silverlight. The argument remains the Silverlight needs to be installed on the client's machine while AJAX comes down for free from the server along with the rest of the page. However, when it comes to programmatic benefits such as a first-class development experience (IntelliSense, compilation, UI designer, etc.), robustness of code, and extent of possibilities of what you can build, you cannot argue against Silverlight.


Now don't get me wrong: AJAX is a really cute technology that has a lot of niche uses. The difference, as far as I'm concerned, falls in the mindset of the developer. Take animating progress bars for example. In AJAX/HTML, you'd need to arrange a few nested divs, set all their widths and background colors, and then when the upload mechanism reports progress, update the width of the inner-most div. To me, that's not really a true progress bar; it's HTML kludginess.

Now, in Silverlight, you see, you literally just animate a ProgressBar.

THAT'S the point.


The example Uploader control I'll show you shortly lacks all the bells and whistles with which it could be adorned to truly make the life of a content manager much better. These features include drag and drop support, multi-file uploads, a slick UI, etc. However, the big ones, like progress bars and cancellation, will demonstrate how uploading large files (such as media) can be a pleasant experience for a user.

First of all, what does it look like? Well, not much, unfortunately. A pretty admin UI was not a requirement of the project through which I came up with SUESS, since this was all back end functionality. But here's what we have:


The title and description fields are watermarked textboxes with some gradient borders to implement the comps of the project. I really don't want to bore you with the details of the code that updates the visual state of the UI; it's hard to make hiding buttons and displaying error messages sexy. You'll be able to download SUESS in its entirety and comb through all the details yourself. Instead, let's take a quick trip through it, and then dive into the cool stuff.

The only input we actually need from the user is the file itself. Depending on your data tier, you could accept more metadata. (SUESS was actually born into SharePoint, so the sky was the limit for metadata.) But for now, we'll just have two optional textboxes for the name of the media and a description. If the name is blank, we'll use the file name. (As you'll see later, with the way Encoder outputs its SmoothStreaming formatted files, we don't have to worry about filename collisions; it's only a consideration during uploading.)

Next we have the "Encoding Quality" drop down. Once again, this is optional, and included only as an example to show another aspect of what you can do with SUESS in terms of explicitly controlling a myriad of Encoder options. (You'll see what I'm talking about in the Encoding post about SUESS.) These are hard-coded (barf, I know) values mimicking the Video Complexity (no documentation available) enumeration in the Encoder SDK. This is the best we can do to this extent, since we can't actually reference the DLL in our Silverlight project and reflect each enum member.

The purpose of including this in the example Uploader is to allow the user the ability to "throttle" the encoding process, which, as previously mentioned, could be very processor (and, of course, time) intensive. Unfortunately, I don't have a lot of metrics regarding just how much quality is sacrificed in terms of time taken to encode. However, I can say that media encoded with the highest quality settings indeed takes much longer than with the lowest.

Finally, we have the media itself. Silverlight gives us the OpenFileDialog control that is more or less identical to what you'd find in Windows Forms. The one immediate advantage it gives us over what is available in the HTML version is the ability to filter the file types (by extension) that the user is allowed to select in the dialog. This not only gives us a more elegant user experience (since we can "label" our filter with something like "Media Files,") but also makes the development much easier, since we don't have to go through the "hand slapping" input validation experience.


Back in the HTML/AJAX version of a file upload control, if you need to enforce a certain type (or types) of file(s), the best you can do is inspect the file the user selected, check its extension, and display an error message, forcing the user to have to go through the entire exercise again. Instead, with Silverlight, we can simply hide invalid file extensions in the folders they navigate within the dialog via the Filter property. This probably doesn't seem like a big deal, but I think it's huge: small UX improvements like these are what Web 2.0 is all about; with Silverlight, you are really using an application, not just a webpage.


Let's talk about a few of the more interesting code samples. The first one simply sets the OpenFileDialog to only allow the files that Encoder supports to be uploaded. It also has a hard coded value (which can easily be made to be configurable) to disallow files over 1GB (purely for sanity purposes).

Code Listing 1

  1. private void btnSelectFile_Click(object sender, RoutedEventArgs e)
  2. {
  3. //initialization
  4. OpenFileDialog ofd = new OpenFileDialog();
  5. //show open file dialog
  6. ofd.Multiselect = false;
  7. ofd.Filter = "Media Files|*.asf;*.avi;*.bmp;*.gif;*.jpg;*.jpeg;*.m2t;*.m2ts;*.mov;*.mp4;*.mpeg;*.mpg;*.png;*.tif;*.tiff;*.ts;*.vob;*.wmv;";
  8. bool? fileSelected = ofd.ShowDialog();
  9. //check for file
  10. if (fileSelected.HasValue && fileSelected.Value)
  11. {
  12. //check legth
  13. if (ofd.File.Length > this._maxFileSize)
  14. {
  15. //(omitted UI code)
  16. //file too large
  17. this.tbProgress.Text = "You cannot upload a file larger than 500 MB.";
  18. return;
  19. }
  20. //(omitted UI code)
  21. //start upload
  22. this.StartUpload(ofd.File);
  23. }
  24. }

Line #7 above sets the Filter property of the OpenFileDialog to the Encoder 3 supported file types. This is what ensures that Encoder will be able to handle whatever the user throws at it. Line #15 shows that "UI" code has been omitted. You'll see this in a lot the code samples throughout SUESS. Like I said, I don't want to bore you with the details of dealing with VisualStates in the application. Here are some more action shots:




Next let's look at the logic that implements the recursive "chunky" upload. If we send the entire file up to the server, we can't show progress bars, and really don't take advantage of client side functionality at all. Instead, the SUESS Uploader sends 1MB of the file up to the server at a time. After each call, the UI is updated with the current progress, and then the next "chunk" is uploaded.

Since all Silverlight service communication is done asynchronously, we need to daisy chain the "Completed" callback for each of these calls so that they are executed serially within it. Normally, if service calls can be done in parallel (such as downloading images or assembling the content of unselected tabs), you can kick off them all off at the same time; when they're done, they're done, and the corresponding part of the UI lights up.

Otherwise, if you need to call service A and then pass its result on to service B (or a subsequent call back to A, like the Uploader), then there will be some chaining going on. Here's a quick diagram that demonstrates these two paradigms.


In the top half, where we show "parallel" calls, all of the service references are explicit. We invoke a service, and when the associated completed event handler is fired, we do something back on the UI. What if we need to call the same service multiple (and in an unknowable amount of) times in a specific order? This is the case with the Uploader. Since we obviously can't know the size of the file, then we further don't know how many chunks we'll be dealing with. We need to make these calls serially.

The answer is recursion, as shown in the bottom half of the diagram. (Of course, this is only logical recursion, since the same physical method isn't actually calling itself.) I experimented with some crazy iterative algorithms, but they got real messy real quick, and were all ultimately besmirched by the asynchronousity of Silverlight. Instead, I created a method that uploads a chunk (array of bytes) of a file, and when it's done, increments a counter and progress bars. Finally, the same service is called again with the next chunk.

This algorithm, other than being sweet at uploading files, makes two additional features of the Uploader trivial: real time progress bars and upload cancellation. We'll discuss those next. First, however, let's look at this code. This piece is the StartUpload method that the above code block calls to kick off the recursion.

Code Listing 2

  1. private void StartUpload(FileInfo file)
  2. {
  3. //(omitted UI code that resets progress bars, clears errors, etc.)
  4. //start recursive upload
  5. this._index = 0;
  6. this._fileName = string.Format("{0}-{1}", Guid.NewGuid(), file.Name);
  7. this.UploadFile(file, true);
  8. }

The Boolean passed as the second parameter to UploadFile in Line #7 simply tells the algorithm this is the first chunk. UploadFile actually doesn't care; it just passes this along to the service so that it knows whether to create a new file on the data tier or open an existing one. This can probably be inferred on the server rather than made explicit, but I didn't want to have to burn an extra trip to the disk for each chunk if I didn't have to.

Here's the algorithm:

Code Listing 3

  1. private void UploadFile(FileInfo file, bool isFirstChunk)
  2. {
  3. //initialization
  4. byte[] buffer = new byte[this._bufferSize];
  5. MediaServiceSoapClient svc = Utilities.GetMediaClient();
  6. //get chunk
  7. using (Stream data = file.OpenRead())
  8. {
  9. //write to buffer
  10. data.Seek(this._index, SeekOrigin.Begin);
  11. data.Read(buffer, 0, this._bufferSize);
  12. }
  13. //upload chunk
  14. svc.UploadFileChunkAsync(this._fileName, buffer, isFirstChunk);
  15. svc.UploadFileChunkCompleted += (sender, args) =>
  16. {
  17. //determine result
  18. if (args.Cancelled || args.Error != null || !string.IsNullOrEmpty(args.Result))
  19. {
  20. //(error code very much truncated here; the above if statement should check each OR'ed condition separately and update the UI appropriately)
  21. }
  22. else if (this._index > -1)
  23. {
  24. //update text progress
  25. double progress = (Convert.ToDouble(this._index) / Convert.ToDouble(file.Length)) * 100;
  26. this.pbUpload.IsIndeterminate = false;
  27. this.tbProgress.Text = string.Format("{0} / {1} MB uploaded...", Convert.ToDouble(this._index) / Convert.ToDouble(this._bufferSize), this.GetTotalSize(file.Length));
  28. //animate value
  29. Storyboard sb = new Storyboard();
  30. DoubleAnimation da = new DoubleAnimation();
  31. Storyboard.SetTarget(da, this.pbUpload);
  32. Storyboard.SetTargetProperty(da, new PropertyPath("Value"));
  33. sb.Children.Add(da);
  34. da.To = progress;
  35. //determine if we've reached the end of the file
  36. if (file.Length > this._index)
  37. {
  38. //pause and upload next chunk
  39. Thread.Sleep(200);
  40. this._index += this._bufferSize;
  41. //recurse
  42. this.UploadFile(file, false);
  43. }
  44. else
  45. {
  46. //(omitted UI code)
  47. //we're done - start encoding
  48. this.EncodeCompletedFile();
  49. }
  50. //animate progress
  51. sb.Begin();
  52. }
  53. else
  54. {
  55. //update canceled
  56. this.ShowCancel();
  57. return;
  58. }
  59. };
  60. }

The first thing to point out here is the utility method in Line #5. This method is a nice Silverlight helper that alleviates the need to have to deal with the "ServiceReferences.ClientConfig" files that store the Service URLs and other WCF settings. Using this method allows you to promote your control from development through production without having to worry about maintaining it.

Code Listing 4

  1. public static MediaServiceSoapClient GetMediaClient()
  2. {
  3. //return
  4. return new MediaServiceSoapClient(Utilities.GetBinding(), Utilities.GetAddress("Media"));
  5. }

Here are the two internal methods that build the WCF binding and get the URL of the service dynamically:

Code Listing 5

  1. private static BasicHttpBinding GetBinding()
  2. {
  3. //initialization
  4. BasicHttpBinding binding = new BasicHttpBinding(BasicHttpSecurityMode.None);
  5. //set timeout
  6. binding.SendTimeout = TimeSpan.FromHours(1);
  7. binding.OpenTimeout = TimeSpan.FromHours(1);
  8. binding.CloseTimeout = TimeSpan.FromHours(1);
  9. binding.ReceiveTimeout = TimeSpan.FromHours(1);
  10. //set messags sizes
  11. binding.MaxBufferSize = int.MaxValue;
  12. binding.MaxReceivedMessageSize = int.MaxValue;
  13. //return
  14. return binding;
  15. }

And the piece to get the service endpoint. (Note that since this example was lifted from a SharePoint implementation of SUESS, I had to be a bit creative with the URL.)

Code Listing 6

  1. private static EndpointAddress GetAddress(string name)
  2. {
  3. //window.location.href
  4. Uri uri = new Uri(((ScriptObject)HtmlPage.Window.GetProperty("location")).GetProperty("href").ToString());
  5. //return
  6. return new EndpointAddress(string.Format("{0}://{1}/_vti_bin/{2}Service.asmx", uri.Scheme, uri.Host, name));
  7. }

The rest of the main algorithm is actually pretty straight forward. The recursiveness happens in lines 39 through 42. We pause a bit in Line #39 so that Silverlight doesn't try to open the file from a new thread before the previous one properly closed it, increment the index of where we "are" in the file, and then recursively call the service.

How do we break out of the recursion? Three things can happen. First of all, if something goes terribly wrong on the server or some other exception is thrown, it will be caught on Line #20, handled, and then we'll hard return out of the method. (Again, since this isn't physical recursion, we don't have any stack trace "depth" to worry about.) Otherwise, we use the index. If it's greater than or equal to the length of the file, we know we've uploaded all the bytes: break out here, and start encoding.

The final way is through cancelation, which is one of the big features the Uploader. I know this isn't anything amazing, but it's another example of something that's pretty easy in Silverlight and probably pretty tough in HTML. As someone who's been in and around SharePoint for years, I've seen a lot of large file uploads quietly timeout after watching the page spin for ten minutes. That sort of behavior is not good enough; we need a big red self-destruct button to make sure we can cleanly stop an upload.

So as all this asynchronous uploading is happening on background threads, how do we cancel it from the click event of a cancel button on the UI thread? It turns out that it just works. Silverlight will automatically fire the completed event for a service call on the correct thread, alleviating the need for any dispatching. Since we don't have to worry about any cross threading, we can jump right in with the logic.

The Uploader cancel button click does two things. First, it simply sets the aforementioned index to -1, which basically throws a wrench in the recursive gears. Since all of our calls are on the same thread, we can check this index, see that we've been cancelled, and, well, stop making calls. This is all the housekeeping we need on the client. But what about the server?

The second cancellation task is to make one more call that tells the server that this upload has been cancelled so it can clean up the file. This is one of the reasons for the intermediate "upload" folder on the server: we never had to worry about IIS serving fragments of files. Other than cancellation, dropped connections will also leave broken files on the server. If the upload connections die, then we obviously won't be able to make another call to tell the server to clean up this file. Here is where the cleanup job finishes up for us.

This is a good transition to start looking at server code. We'll begin with the CancelFile method:

Code Listing 7

  1. [WebMethod]
  2. public string CancelFile(string path)
  3. {
  4. //initialization
  5. string result = string.Empty;
  6. path = Path.Combine(this.GetTempUploadPath(), path);
  7. //impersonation
  8. SPSecurity.RunWithElevatedPrivileges(() =>
  9. {
  10. try
  11. {
  12. //delete
  13. if (File.Exists(path))
  14. File.Delete(path);
  15. }
  16. catch (Exception ex)
  17. {
  18. //(error code omitted)
  19. }
  20. });
  21. //return
  22. return result;
  23. }

The interesting thing going on here is in Line #8 where I elevate to run as the SharePoint app pool account. This is important for two reasons (neither of which, of course, are what the RunWithElevatedPrivileges delegate is designed to do). First of all, we don't have to assign "Everyone" permissions on our folders. Second, it gets us around a potential IIS double hop issue, in case the GetTempUploadPath method (which is a wrapper around a config file call) returns a UNC path. (Kerberos is the right way to deal with IIS double hops, but that seems to be more configuration than most people - including me - are willing to deal with).

Next we have the method that accepts a chunk from the client and builds a file on the destination server:

Code Listing 8

  1. [WebMethod]
  2. public string UploadFileChunk(string path, byte[] data, bool isFirstChunk)
  3. {
  4. //initialization
  5. string result = string.Empty;
  6. Action<object> uploadCode = null;
  7. path = Path.Combine(this.GetTempUploadPath(), path);
  8. //impersonation
  9. SPSecurity.RunWithElevatedPrivileges(() =>
  10. {
  11. try
  12. {
  13. //determine if this is the first request
  14. if (isFirstChunk)
  15. {
  16. //delete
  17. if (File.Exists(path))
  18. File.Delete(path);
  19. }
  20. //determine if file exits
  21. if (File.Exists(path))
  22. {
  23. //add chunk to existing file
  24. uploadCode = (o) =>
  25. {
  26. //open file
  27. using (FileStream fs = File.Open(path, FileMode.Open))
  28. {
  29. //write chunk
  30. this.WriteChunk(fs, data);
  31. }
  32. };
  33. }
  34. else
  35. {
  36. //create new file
  37. uploadCode = (o) =>
  38. {
  39. //open file
  40. using (FileStream fs = File.Create(path))
  41. {
  42. //write chunk
  43. this.WriteChunk(fs, data);
  44. }
  45. };
  46. }
  47. //upload
  48. Utils.ForceRetryFunction<object, Exception>(() => { uploadCode(null); return null; }, "MediaService.UploadFileChunk", string.Concat("The following error occured while uploading ", path));
  49. }
  50. catch (Exception ex)
  51. {
  52. //(error code omitted)
  53. }
  54. });
  55. //return
  56. return result;
  57. }

There are a couple of things to note here. First of all, you'll notice that I use an Action delegate to pass the blocks of code that do the file writing (WriteChunk will be displayed shortly) to a method in Line #48 called ForceRetryFunction. This method takes in a Func (that is passed via an anonymous method in the invocation) which is basically processed in a while loop until it doesn't throw the type of exception (that is generically passed in as well). There are few other operations in the system that spawned SUESS, so I refactored it into ForceRetryMethod.

But WHY? Simply because certain operations need to be kicked in the ass to work. In this example, with I/O happening on several threads really fast, .NET can step on itself and open the file before it's properly closed (just like on the client). I'll put the code here for fun because it's a pretty cool algorithm, but the details are outside the scope of SUESS.

Code Listing 9

  1. public static T ForceRetryFunction<T, E>(Func<T> code, string sender, string description) where E : Exception
  2. {
  3. //initialization
  4. int sleep = 500;
  5. bool worked = false;
  6. T result = default(T);
  7. DateTime now = DateTime.UtcNow;
  8. //keep trying
  9. while (!worked)
  10. {
  11. try
  12. {
  13. //run the code
  14. result = code();
  15. //code successful
  16. worked = true;
  17. }
  18. catch (E ex)
  19. {
  20. //only try for one minute
  21. if (DateTime.UtcNow.Subtract(now).TotalMinutes > 1)
  22. {
  23. //unable to save
  24. //(error code omitted)
  25. worked = true;
  26. }
  27. else
  28. {
  29. //method blew up: sleep and try again
  30. lock (Utils._random)
  31. sleep *= Convert.ToInt32((1.5 + Utils._random.NextDouble()));
  32. //sleep exponentially
  33. Thread.Sleep(sleep);
  34. }
  35. }
  36. }
  37. //return
  38. return result;
  39. }

After all the trying, retrying, locking, and checking, the actual work that the server performs can be boiled down the simplest method in this section: WriteChunk. Here's the little guy:

Code Listing 10

  1. private void WriteChunk(FileStream fs, byte[] buffer)
  2. {
  3. //seek to end of the file
  4. fs.Seek(0, SeekOrigin.End);
  5. //write chunk
  6. fs.Write(buffer, 0, buffer.Length);
  7. }

Very straight forward, and unfortunately, a rather anticlimactic way to end our discussion of the SUESS Uploader. Once the file is up on the server, the Uploader's only remaining task is to kick off the Encoder stage. This is where all the really cool stuff happens, so stay tuned for the next post!

Have fun!

3 Tags

No Files

No Thoughts

Your Thoughts?

You need to login with Twitter to share a Thought on this post.