A .meshproj file also imports $(MSBuildExtensionsPath)\Microsoft\Live Framework\v1.0\Microsoft.LiveFramework.targets, which first ensures that your project has an index.html and then zips up the output directory, naming the zip file with the OutputName you specified.
You don’t need to use Visual Studio to do this. You can easily create your own Mesh app package by hand. At a minimum, your zip file must contain:

- index.html: the entry point for your app.
- Logo.png: the icon that will be displayed on your desktop; it should be a 32-bit, 256 × 256 PNG.
- Manifest.xml: your app manifest. A detailed description of the manifest configuration options is documented here.
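Building the package really is just zipping those files together. Here is a minimal sketch, assuming the three required files sit together in an app directory; the file names come from the list above, while the function and output names are just illustrative:

```python
# Minimal sketch of packaging a Mesh app by hand. The required file
# names come from the post; everything else here is illustrative.
import zipfile

REQUIRED = ["index.html", "Logo.png", "Manifest.xml"]

def build_package(app_dir, zip_path):
    """Zip the required files from app_dir into zip_path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for name in REQUIRED:
            # Like the MSBuild targets' index.html check, this fails
            # loudly (FileNotFoundError) if a required file is missing.
            zf.write(f"{app_dir}/{name}", arcname=name)
    return zip_path
```

The files are stored at the root of the archive (via `arcname`), which matches the layout Visual Studio produces when it zips the output directory.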
I believe you can bundle anything you want in the zip file, although Microsoft supposedly runs an antivirus scan on the contents, and there may be additional checks for inappropriate content. Everything in the zip file gets downloaded to your computer when the app is installed on your local desktop, which is why my Flash app was able to run offline.
You might be able to use this to “install” an XBAP application that can run offline. To make it cross-platform, you could bundle the XBAP together with a “down-level” Silverlight version and choose which one to display based on what the client supports. If download size is a concern, it might be possible to put the executables in a DataFeed instead of the app zip file and selectively sync only the version you want to display. I haven’t dug into DataFeeds enough yet to know whether this kind of per-client sync filtering is possible. Of course, you would be working against the built-in version management if you did this, since updates are only supposed to occur when the user closes and re-opens the app.
OK, so uploading a zip file sounds nice and simple, right? Then why does Visual Studio want me to copy and paste the Application Self-Link URI? It turns out that if you use Visual Studio, you only upload the zip file once per app. Once you’ve uploaded the zip and told Visual Studio about the Self-Link URI, Visual Studio uses that URI for subsequent deployments to upload the individual files directly.
If you watch Visual Studio with Fiddler (you’ll need to configure HTTPS support), you will see it query the Mesh for your app’s resource feeds, issue an HTTP DELETE for each resource that was inside your zip file, and then issue a series of POSTs to upload each item in your project.

That seems a bit risky: what if Visual Studio dies before re-posting all the resources it deleted? Updating an app by manually uploading a zip file seems like a safer, slightly more atomic operation. It’s no big deal right now, but once real production apps are being upgraded, something more robust would be nice. I’m guessing we will see more explicit versioning, giving the user the choice of whether or not to upgrade. If such a feature is added, the direct app resource update trick might be useful for bypassing an explicit upgrade prompt.
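To see why the delete-then-repost sequence is risky, here is a small simulation. This is not the real Live Framework API; a dict stands in for the app’s cloud resource feed, and the “crash” flag stands in for Visual Studio dying mid-deployment:

```python
# Simulation of the non-atomic update Visual Studio performs: DELETE
# every old resource, then POST each new one. The `feed` dict is an
# in-memory stand-in for the cloud resource feed, not the real API.

class ClientCrashed(Exception):
    pass

def update_app(feed, new_resources, crash_after_deletes=False):
    for name in list(feed):        # one HTTP DELETE per old resource
        del feed[name]
    if crash_after_deletes:        # e.g. Visual Studio dies here
        raise ClientCrashed
    for name, body in new_resources.items():  # one POST per new item
        feed[name] = body

feed = {"index.html": "v1", "Logo.png": "v1"}
try:
    update_app(feed, {"index.html": "v2", "Logo.png": "v2"},
               crash_after_deletes=True)
except ClientCrashed:
    pass
# feed is now empty: the old app is gone and the new one never arrived.
```

Uploading a whole replacement zip, by contrast, hands the server everything at once, which is why the manual route feels closer to an atomic swap.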
The next time your Live Mesh client (MOE.exe) talks to the cloud, it will download the new versions of the files into your local app cache (AppData\Local\Microsoft\Live Framework Client\Bin\Moe2\MR\). For some reason I was unable to pinpoint the download traffic with Fiddler, so I can’t say for certain whether individual files are downloaded or whether they are zipped up first. It appears that older versions of files aren’t removed. This is probably to support the explicit user upgrade scenario in the future, but it seems like they could still be doing more cleanup.
I’m really curious why Visual Studio updates individual app resources rather than following the documented workflow of uploading a zip file with the updates. Anyone know?
Update: I posed this question in the comments on Danny Thorpe’s blog and he responded:
On your second question, the reason we upload files individually instead of uploading the zip file is because the REST API we’re uploading to doesn’t handle zip files. The dev portal that you manually upload your zip file to unzips the file and uploads the individual bits to the production storage. The Live Services REST APIs that the VS tools use to upload files goes (as far as I know) straight into the production storage.
In a nutshell, the dev portal that you see in your web browser is just a front end to the actual cloud service. VS doesn’t upload to the dev portal UI, it uploads to the cloud itself.
Keep in mind that the long side trip of manual steps that you currently have to go through to get a new app created and uploaded to the cloud will all be going away as soon as the cloud APIs to create and provision a new application are implemented.
He also explains the debugging versioning scheme in the comments; I suggest you go read them for more great details.
One other related insight from Danny comes from this forum thread:
Our goal for the VS tools is to do all development against the local LOE and let the local LOE deal with sync'ing things back to the cloud. All the parts needed to do that aren't ready yet, so for the PDC CTP we redirected the VS tools to upload and debug mesh apps in the cloud.
This makes the current chattiness (and the “glue” dialog box) much more acceptable to me since the end goal is to use the local REST API rather than the cloud API.
Update 2: Danny has posted a thorough response to this post. There’s lots of great information there, so I won’t quote it all here. One “aha” moment for me was the concept of separate debug application resources. He also confirms that the “glue” dialog will be going away soon. Go read it for details. Thanks, Danny!