This was a good overview session on how Yammer can be used. They used the creation of the presentation content as a collaboration scenario. They showed how to bring various people into the conversation, including people external to the company. They also explained some of the UI elements like the feed and the groups list.
There were some questions regarding guidance on when to use Office 365 Groups, SharePoint and Yammer. One of the attendees from Cargill mentioned that Yammer is for data in transit and SharePoint is for data at rest. Another attendee recommended using each platform for its strengths: SharePoint for content storage, Yammer for discussions and OneNote for shared notes. I think both of those approaches are fairly similar and totally make sense.
All in all, it was a pretty straightforward session. I was expecting more out of it but it turns out that Yammer itself is pretty straightforward!
I had looked at Docker a bit a few months ago, but I wanted to get an idea of how Microsoft was implementing containers. At a high level, containers are similar to application virtualization technologies. They are implemented differently, however, as container support is really built in as part of the OS. There are basically two ways Microsoft will implement the container runtime. The first is to port the Docker runtime to the Windows platform. The second will use Hyper-V as the technology to create the proper isolation boundaries.
Here’s a high-level idea of the process to create a container:
1) Create a new blank container. Inside your container, you will basically see the traditional Windows file system content, but it’s really independent from the underlying file system that is hosting the container
2) Inject anything you need in the container to run your application (framework version, other dependencies)
3) Inject your application
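In Docker terms, the three steps above map naturally onto a Dockerfile. Here’s a minimal sketch; the base image name, the IIS dependency and the application path are all illustrative assumptions, not something shown in the session:

```dockerfile
# Step 1: start from a blank(ish) base image that presents the
# traditional Windows file system view (image name is illustrative).
FROM windowsservercore

# Step 2: inject the dependencies your application needs to run
# (here, hypothetically, the IIS web server feature).
RUN dism /online /enable-feature /featurename:IIS-WebServer /all

# Step 3: inject the application itself.
COPY MyApp/ /inetpub/wwwroot/
```

Building this produces the immutable container image described below; every instance created from it starts from the exact same file system state.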
The container is immutable by nature, meaning it will never change after its creation. Once it is tested, you can pretty much guarantee that it will run as planned once deployed.
When you want to run the application in your container, you simply instantiate the container, and the container will make sure to run the command you specified after it’s instantiated. This creates a running version of your application inside the container. It’s important to note that any data that needs to be stored must be persisted outside the container. For instance, in the case of MySQL, you would need to store the data files in a network-accessible area. As each container instance has its own network address (MAC and IP), its only way of communicating with the external world is through the network.
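With the Docker CLI, instantiating such a container could look like the sketch below. The image name, host volume path, password and port are illustrative assumptions (and it requires a running Docker daemon):

```shell
# Instantiate the container; Docker runs the startup command baked
# into the image. -v maps a host (or network-backed) directory into
# the container so the MySQL data files outlive this instance.
docker run -d \
  -v /mnt/shared/mysql-data:/var/lib/mysql \
  -e MYSQL_ROOT_PASSWORD=example \
  -p 3306:3306 \
  mysql

# Each instance gets its own MAC/IP, so clients talk to it over the
# network through the published port; docker ps shows the mapping.
docker ps
```

If the container is destroyed and re-instantiated, the data survives because it only ever lived on the mapped volume, never inside the container itself.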
There are many workloads that already work inside Windows containers, like ASP.NET 5, Java, Nginx and MySQL, to name only a few.
A great session with a LOT of demos from @VirtualPCGuy and @VirtualScooley, a nice change from some of the other sessions I attended. Here are some of the cool things that are coming in Hyper-V in Windows Server 2016:
– Full support for ReFS, which gives instant fixed-size VHDX initialization and merge
– Hot-Add network adapters
– Online resize for shared VHDXs
– PowerShell Direct: Can issue PowerShell commands directly from the host to the guest VM
– Integration Services deployment through Windows Update
– Production checkpoints: can now checkpoint VMs in production and be supported by Microsoft
– Rolling cluster upgrade and manual VM version upgrade (can roll back to 2012 R2 without any downtime)
– Ability to change the memory size for static VMs
– Hot-Add VHD in Hyper-V replica
– Replica doesn’t use CSV level vss snapshots anymore!
– When storage issues arise, Hyper-V will now pause the VM automatically and resume it when the storage comes back online
– Storage QoS is cluster wide and you can now define “tenant” level capacity that can span multiple VMs
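Of the features above, PowerShell Direct is easy to sketch. It issues commands to the guest over the Hyper-V host-to-guest channel rather than the network; the VM name below is a placeholder, and you need guest credentials:

```powershell
# Run a command inside a guest VM directly from the Hyper-V host.
# "TestVM01" is a placeholder VM name.
$cred = Get-Credential

Invoke-Command -VMName "TestVM01" -Credential $cred -ScriptBlock {
    # This block executes inside the guest OS.
    Get-Service | Where-Object Status -eq 'Running'
}

# An interactive variant also exists:
Enter-PSSession -VMName "TestVM01" -Credential $cred
```

The nice part is that this works even when the guest has no network connectivity at all, which makes it handy for bootstrapping or repairing a VM’s networking configuration.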
It was also nice to meet some of the great MVPs I interact with on a regular basis on Twitter, namely @WorkingHardInIT and @asyrewicze.
Good session about how Azure Service Fabric is deployed and managed. There was again a demo of their Chaos Testing service, which randomly crashes things in the cluster and lets you see that your application keeps going. There are also a lot of customization options. I was also able to see how the platform performs automatic upgrades and rollbacks in the event of failures caused by bad code. It is also possible to perform A/B testing of applications through partitioning, simply by creating two separate deployments.
There’s also a nice monitoring platform underneath that allows you to monitor performance and health status, and it can even go as far as capturing business-level metrics. All of those capabilities are extensible, and you can also trigger real-time actions such as alerts or cluster actions to respond to certain events. It was also shown how you could surface that information in Azure Application Insights. It was interesting to see how diagnostic information was collected from all the nodes into a central repository in Azure Table storage, which was then presented to Operational Insights for further analysis and troubleshooting. There are some basic events generated by the platform, but your application can also log custom events, which you can refer to for faster troubleshooting.
You can currently run this technology on-premises through the newly released SDK, but it will also be coming to the Azure Stack, which is good news! I’m still curious to see how that ties in with Nano Server and containers…