Community

What to expect from Tech Field Day 16

Theresa Miller - Tue, 02/20/2018 - 06:30

I was excited to receive an invitation to a Tech Field Day event in Austin, Texas. It’s a long way from my home in Brisbane, Australia, but so worth the trip. Here’s what to expect and how to watch. Tech Field Day 101 Started by Stephen Foskett, Tech Field Day is a completely unique concept. […]

The post What to expect from Tech Field Day 16 appeared first on 24x7ITConnection.

Red Hat and CoreOS Add to the Container Stack

Theresa Miller - Tue, 02/13/2018 - 06:30

Fans of the open source ecosystem will undoubtedly recognize the names Red Hat and CoreOS. Red Hat has long been the leader of open source in the enterprise, with a huge following across community projects including Fedora, Ansible, Ceph, and many others. CoreOS has just been added to the list of acquisitions with […]

The post Red Hat and CoreOS Add to the Container Stack appeared first on 24x7ITConnection.

Serverless for the traditional infrastructure admin

Theresa Miller - Wed, 02/07/2018 - 06:05

Serverless is the new buzzword everyone loves to hate. Is it a real thing? Is it a poorly named marketing buzzword? Is it something I need to learn about? What everyone thinks it is When I asked people on Twitter for a definition of serverless, I got some interesting answers.   Event-driven functional architectures — […]

The post Serverless for the traditional infrastructure admin appeared first on 24x7ITConnection.

Predictions for the Data Center in 2030

Theresa Miller - Tue, 01/30/2018 - 06:05

How will we manage the data center in 2030? The history of tech tells us that a lot can happen in 12 years. For example, let’s look at what has happened in the past 12 years, beginning in 2006. Technology has changed significantly with advances that have included the iPhone, Facebook, Twitter, flash storage drives, […]

The post Predictions for the Data Center in 2030 appeared first on 24x7ITConnection.

Building a Fast and Silent Workstation PC

Helge Klein - Tue, 01/16/2018 - 14:15

This article describes how to build a fast workstation PC that is almost completely silent (actually the fastest possible in terms of single-thread performance). It is based on a PC build published by German c’t magazine.

Why Single-Thread Performance is (Nearly) the Only Thing That Matters

There are many ways to evaluate CPU performance, but it basically boils down to a simple question: am I interested in single-thread or multi-thread performance?

The latter obviously yields a higher number, and many benchmarks focus on the aggregate performance of all the cores in the CPU combined. But in a single-user machine, multi-thread performance only rarely matters. The vast majority of software uses only a single thread (and thus effectively a single CPU core) for any significant calculations. In practice that means: if you want things done quickly, you need a CPU with high single-thread performance.

Multi-thread performance, while probably overrated, is not unimportant. There are quite a few applications out there that do use multiple threads at least part of the time to speed things up.

With the above in mind, we can define our ideal CPU: highest possible single-thread performance with good multi-thread performance. As it turns out, that CPU is currently Intel’s i7-8700K. No other x86 CPU matches its single-core speed, and with six cores total it is a more than decent multi-core contender, too.

Component Selection

Once I knew I wanted the i7-8700K I went to look for ways to run it at next to zero decibels. This is where c’t magazine comes in, probably the world’s only truly fantastic computer magazine left. c’t regularly publishes PC builds that focus on low noise emissions and energy efficiency. One of the best aspects of their builds is that they ignore unnecessarily complex techniques like water cooling or insulation, opting for noise reduction at the source instead. They basically put the fans in the right places in the chassis and test the hell out of the components to make sure none of them emit any unwanted noise.

Based on their suggestions I assembled the following components for my workstation PC build:

  • CPU: Intel i7-8700K
  • Mainboard: MSI Z370 Gaming Pro Carbon
  • RAM: 4 x Crucial DDR4 16 GB PC4-19200 non-ECC (64 GB in total)
  • SSD 1: Samsung SSD 960 Pro M.2
  • SSD 2: Samsung SSD 850 Pro (data disk from my previous machine)
  • CPU cooler: Thermalright Macho Rev.B
  • Power supply: be quiet! Pure Power 10 400 Watt
  • Case: be quiet! Pure Base 600
  • GPU: MSI GeForce GTX 1060 Gaming X 6G 6GB

A “gaming” mainboard would not have been my natural choice. Neither do I really need a dedicated GPU. Unfortunately, none of the currently available mainboards for Intel’s 8th generation (Coffee Lake) CPUs are equipped with two DisplayPorts. As we will see below, this choice of mainboard and GPU negatively affects power consumption.

Assembling the Components

The actual build is pretty straightforward. However, you should move the front fan (which is basically useless where it is located) to the back of the case’s top, so that the heat is blown out to the top as well as to the back (by the second preinstalled fan). The case’s top plastic cover can easily be shortened with a saw so that it does not cover the new top-blowing fan.

Here is a view from an intermediate stage before the mainboard was put in the case:

BIOS Settings

First of all, update the BIOS to the latest version, then load the defaults. With that in place, configure the following settings:

  • Settings > Advanced > Windows OS Configuration > Windows 8.1/10 WHQL Support: enabled
  • Overclocking > CPU Features > Intel C-State: enabled
  • Overclocking > CPU Features > Package C-State Limit: C8
  • Overclocking > CPU Features > C1E Support: enabled
  • Hardware Monitor > CPU 1: 40°/15%, 60°/30%, 70°/60%, 85°/100%
  • Hardware Monitor > System 1: 30°/4.08V, 60°/5.04V, 70°/7.08V, 85°/12V
  • Hardware Monitor > System 4: 30°/4.08V, 60°/5.04V, 70°/7.08V, 85°/12V

Power Consumption

A modern PC’s power consumption is highly volatile, changing many times per second depending on the workload. The most important value, however, is idle power consumption, because no matter how furiously you work, your machine will be idling a large part of the time.

There have been great improvements in idle power consumption in recent years. As an example, c’t published a build for an 11 Watt PC in December 2016. This i7-8700K build, unfortunately, is not as efficient. The best I observed is a little more than 36 Watts. The typical idle power consumption is around 40 Watts.

I blame Intel’s Z370 chipset, the only chipset currently available for Intel’s 8th generation (Coffee Lake) CPUs. According to other people’s measurements, the GPU consumes a bit less than 10 Watts when idle. Interestingly, it does not matter whether you connect one or two (4K) displays to the GPU. There is not even a significant change in power consumption if you connect a single display to the mainboard’s Intel graphics instead.

One caveat to be aware of: in sleep mode, the system consumes 12.6 Watts, which is about 12 Watts more than it should. This is the case even with the ErP Ready BIOS setting enabled, which is supposed to make the system conform to the EU’s environmental regulations. I worked around this inefficiency by configuring Windows to hibernate when the power button is pressed instead of sleeping. During hibernation the system only consumes 0.2 Watts.
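
For reference, this remapping can be scripted; here is a minimal sketch from an elevated prompt, assuming the standard powercfg aliases (the index value 2 = Hibernate is worth verifying against powercfg /query on your own system):

# Map the power button to Hibernate on AC and battery power
# (documented indexes: 0 = Do nothing, 1 = Sleep, 2 = Hibernate, 3 = Shut down)
powercfg /setacvalueindex SCHEME_CURRENT SUB_BUTTONS PBUTTONACTION 2
powercfg /setdcvalueindex SCHEME_CURRENT SUB_BUTTONS PBUTTONACTION 2
powercfg /setactive SCHEME_CURRENT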

Noise Emissions / Silence

I do not have the equipment to measure noise emissions, but I can say that the machine is nearly completely silent. Only when there is no ambient noise at all can a very unobtrusive mid-frequency ventilator humming be heard. This does not even change during prolonged periods of high load.

CPU Performance

There are many different ways to measure CPU performance. I find Cinebench to be a useful indicator. It has both single-core and multi-core benchmarks. The single-core result of 203 is even slightly higher than expected. The multi-core result of 1414 is only surpassed by some of AMD’s Ryzen CPUs and by Intel’s expensive i9 processors. Cinebench results were measured before any Meltdown/Spectre patches were applied, by the way.

The post Building a Fast and Silent Workstation PC appeared first on Helge Klein.

The Office 365 roadmap for 2018

Theresa Miller - Tue, 01/16/2018 - 06:30

Disclaimer: While the author is a Microsoft MVP in the Office Servers & Services category, this article is provided based on information publicly available at https://products.office.com/en-US/business/office-365-roadmap?filters=#abc No spoilers here, sorry! The official Microsoft roadmaps are moving targets. They give us some indication of what to expect in the Microsoft products and when to expect them, […]

The post The Office 365 roadmap for 2018 appeared first on 24x7ITConnection.

Hyper-V Backup – Products & Options

Helge Klein - Wed, 01/10/2018 - 14:14

It’s 2018, and backing up Hyper-V hosts is solved, right? I certainly expected it to be when I took on the task of finding a simple and reliable way to back up three hosts colocated in a large datacenter. This article summarizes my journey and the findings resulting from it.

Backup Products

It became immediately obvious that Windows Server 2016 does not come with any useful backup functionality for Hyper-V. Looking for commercial products, I found the following three candidates that looked promising:

  • Altaro Backup 7.5
  • Nakivo Backup & Replication 7.3
  • Veeam Backup & Replication 9.5 Update 2

There are other products out there, of course, but looking at their websites none of them seemed enticing enough for me to include them in a trial.

My Requirements

My expectations for a Hyper-V backup product are moderate enough, I believe. I do not want to become a full-time backup administrator. I have precious little time as it is. The products we use for our IT need to “just work”. At least, that is the theory.

Requirement #1: Good UX

First of all, I need a backup product to be easy to install and use. The UI should ideally be self-documenting so it can be used without studying manuals, FAQs and KB articles.

Requirement #2: Reliability

Secondly, backup software needs to do its job reliably, every single day. Backups are fail-safes; you cannot have the fail-safe fail.

Requirement #3: Efficiency

This is more of a nice-to-have than a hard requirement, but it would sure be nice to have a product that does its job swiftly while efficiently deduplicating the backup storage. Also, a light agent and a small installation footprint are always a plus.

Where to Back Up To?

Backup Target Option #1: SMB Share

I intended to store the backups on an SMB file share which the company that is hosting the servers is offering at low cost. I had used that type of SMB share for manual backup jobs for several years.

Backup Target Option #2: Dedicated Backup Server

Some products offer to host the backups on a dedicated backup server. That was never an option for me. Our environment is relatively small, and standing up and managing another physical server just to host backups was out of the question. I was looking for a simple, elegant solution, not additional complexity.

Backup Target Option #3: Cloud Storage

Once you think about it, this seems to be the obvious choice, at least for Hyper-V hosts located in a datacenter with high-speed internet connectivity (like ours). Storing (encrypted) backups in Amazon S3 or Azure Storage solves multiple problems at once: the backups are located in a physically different site, vastly improving disaster recoverability while at the same time reducing the dependency on one hosting provider. With backups stored in the cloud, moving VMs between datacenters becomes so much easier.

The reason why I did not choose this option is as sad as it is simple: none of the products support cloud storage as primary backup targets. This may (and probably will) change in future versions, but today not even Microsoft’s Azure Backup Server has the capability to send backups directly to Azure. As the docs state:

In the current architecture of Azure Backup Server, the Azure Backup vault holds the second copy of the data while the local storage holds the first (and mandatory) backup copy.

Notes From My Product Evaluations

Following are the notes from my trial installations of the three products mentioned earlier, two of which I used for several weeks each before writing this.

Veeam Backup & Replication 9.5 Update 2

First Attempt

This was the first product I tried, simply because Veeam is the largest and most well-known brand out there. During my initial installation attempt, I was a little shocked about the huge 2 GB download. When I was finally presented with the product’s console, I could not find any way to add an SMB share as a backup target. As it turned out in my second attempt several weeks later, you have to locate a tiny little chevron-style button placed near the bottom of the screen:

Once you manage to find and click that, a dialog pops up that lets you enable the “backup infrastructure” tab. Why that is not enabled by default is beyond me. Veeam could probably double their sales by that simple change.

As it stands I failed to locate the magic button during my first attempt, so I uninstalled the product, which is a tedious procedure because Veeam litters the Programs and Features window with a good half dozen entries.

Second Attempt and Verdict

Only after I had tried Altaro and Nakivo and failed did I look at Veeam again. During this second run, I located and skimmed a manual which, if printed out, would dwarf any city’s phone book. My newfound knowledge enabled me to find the magic button and configure the product optimally for our needs.

Veeam Backup & Replication is certainly powerful, but unnecessarily complex. Related settings are spread across different wizards, dialogs and other parts of the UI. However, once configured, it seems to be a reliable workhorse.

Cloud storage as the primary backup target is not supported as of version 9.5. I found a note on a blog that support for object-based storage is to come with version 10, so, hopefully, that will give us direct cloud backup.

My Recommended Veeam Configuration for Hyper-V

For optimal backup consistency with a diverse set of Windows and Linux VMs, I recommend configuring Veeam so that application-consistency is attempted first, crash-consistency second.

Application-consistent backups require Veeam to trigger VSS in the VM, so admin rights in the guest are mandatory for this to work. Sometimes VSS gets confused and fails, or you do not have admin credentials for all VMs. That is when you need the crash-consistent variant as your second option, which basically backs up a VM as if it had suffered a sudden power loss. Unfortunately, this two-phased approach is not enabled by default. To configure it, you have to dig pretty deep:

Hyper-V crash-consistency needs to be specifically enabled in a different dialog. While you are there make sure to switch on changed block tracking (CBT):
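
As a general sanity check (plain Windows tooling, not a Veeam feature), it is worth verifying from inside a guest that VSS is healthy before relying on application-consistent backups:

# Run inside the guest VM from an elevated prompt; every writer should report
# a state of "Stable" with "No error" for application-consistent backups to succeed
vssadmin list writers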

Altaro Backup 7.5

At 273 MB, Altaro is a nice, small download. Installation is similarly quick. The UI is so intuitive that a product manual is not required. By default, Altaro employs the sensible backup strategy of trying application-consistent first, crash-consistent second.

All in all, I was really happy with Altaro until it started failing every single time with the error The backup location contains too many bad data blocks. Error code DEDUP_064:

This happened with two different SMB shares, in the first case after two to three weeks, in the second case after only a few days. I suspect it is caused by an incompatibility with the Linux-based SMB shares our provider offers. Altaro’s support was not able to find a solution. It would be nice if they offered a tool or script to test a backup target for compatibility (which they do not). In any case, Veeam seems to be able to work with the SMB shares just fine.

With regard to cloud backup, Altaro can send secondary copies to Azure, but that capability is reserved for the most expensive Unlimited Plus edition.

Nakivo Backup & Replication 7.3

At just 192 MB, Nakivo is an even smaller download than Altaro. That, unfortunately, is the most positive thing I have to say about it.

The installer leaves critical tasks to the admin which do not even seem to be documented:

  • WS-Man needs to be enabled over the network and firewall rules need to be added (a one-liner for this is sketched after this list)
  • The installer creates a self-signed certificate that browsers reject
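
For reference, the WS-Man prerequisite can usually be handled with built-in Windows tooling; this is general Windows configuration, not something taken from Nakivo's documentation:

# Starts the WinRM service, creates an HTTP listener and adds the matching firewall exceptions
Enable-PSRemoting -Force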

Once you are past the mandatory manual WS-Man configuration and try to access the product’s console in IE (IE being preinstalled on Windows Server) you are greeted with a recommendation not to use IE.

When I tried to add the SMB backup target for which I had a username, a password, and a UNC path, specifying the username turned out to be more of a challenge than I had anticipated:

  • Just the username did not work
  • “.\username” did not work either
  • “user\user” finally did the trick

Again, that does not seem to be documented. Altaro and Veeam did not have any issues authenticating to the share.

Connecting to a second share did not work at all, not even with the “user\user” hack. I verified the credentials were OK by mapping a drive on the command line, which was no problem, of course.

As for the UI, I found Nakivo’s console to be slow and badly designed.

Previous versions of Nakivo required SMB1, which the current 7.3 finally got rid of. Now SMB2 is used instead.

Conclusion

Altaro is a nice product, easy to use, yet very capable. Give it a try if you plan to store your backups on a Windows machine. Veeam is the workhorse whose power comes with quite a bit of complexity. Nakivo cannot be recommended at this point.

Direct backup to the cloud is a feature I would like to see in the future. Today, cloud storage can only be used as a secondary backup target.

The post Hyper-V Backup – Products & Options appeared first on Helge Klein.

Three Simple 2018 Information Security Resolutions

Theresa Miller - Tue, 01/09/2018 - 06:30

New Year’s resolutions. Some people love them, and some people hate them. Some stick to them throughout the year, and others have already ignored them at this point in January. That being said, I have put together three simple 2018 information security resolutions almost everyone can benefit from. There’s also a plethora of blog posts […]

The post Three Simple 2018 Information Security Resolutions appeared first on 24x7ITConnection.

Get latest Citrix Receiver version with PowerShell

Aaron Parker's stealthpuppy - Tue, 01/09/2018 - 01:57

I’ve previously written about deploying Citrix Receiver to Windows 10 via Intune with PowerShell. I wrote a script that will detect an installed version of Receiver and update to the latest version if it is out of date. To start with, I’ve hard-coded the current Receiver for Windows version into the script; however, that’s not necessarily the best approach, because it will require updating whenever Receiver is updated.

The Citrix Receiver download page lists Receiver versions for all platforms, so if we parse that page, we have a source for the latest version of Receiver on each platform.

I’ve written a script that will parse the downloads page and return the current Receiver version for each platform unless a login for that platform is required. If you’re looking to find the Receiver version for Windows, Windows LTSR, Linux, Mac etc., the script can be used to return the version number for the desired platform.

Here’s the script:
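
The published script is embedded as a gist in the original post. As a rough illustration of the approach, a minimal sketch might look like the following; the download page URL and the regular expression are assumptions about the page layout, not the published script's exact logic:

[CmdletBinding()]
Param (
    # Platform to query; ValidateSet gives input validation and tab completion
    [Parameter(Mandatory = $False)]
    [ValidateSet('Windows', 'Windows LTSR', 'Mac', 'Linux')]
    [string] $Platform = 'Windows'
)
# Assumed download page URL; adjust if Citrix moves the page
$Url = 'https://www.citrix.com/downloads/citrix-receiver/'
$Page = Invoke-WebRequest -Uri $Url -UseBasicParsing
# Assumed page wording, e.g. "Receiver 4.10 for Windows"; capture the version number
$Pattern = "Receiver\s+(\d+(\.\d+)+)\s+for\s+$Platform"
If ($Page.Content -match $Pattern) {
    Write-Output $Matches[1]
} Else {
    Write-Warning "No version found for platform '$Platform'."
}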

To use the script, save as Get-CitrixReceiverVersion.ps1 and run from the command line. With no parameters, it will return the version number for Citrix Receiver for Windows:

.\Get-CitrixReceiverVersion.ps1

The script can return the version for a specific platform with the -Platform parameter. The parameter accepts only valid values such as ‘Windows’, ‘Mac’ and ‘Linux’; the script validates the input and supports tab completion.

.\Get-CitrixReceiverVersion.ps1 -Platform Mac

Here’s the script in action:

Get-CitrixReceiverVersion.ps1 returning the latest version for various Receiver platforms

I’ve written this primarily for my purposes, but perhaps there are other purposes that I’ve not yet considered. Feedback, issues and improvements to the script are welcome.

This article by Aaron Parker, Get latest Citrix Receiver version with PowerShell appeared first on Aaron Parker.

Categories: Community, Virtualisation

UK Citrix User Group 2018 – Birmingham meeting

Citrix UK User Group - Fri, 01/05/2018 - 16:00

Our 24th XL event will take place in Birmingham on 15th February 2018. Venue: IET Birmingham, Austin Court, 80 Cambridge St, B1 2NP Birmingham. Agenda (hover over session title for more details): 09:00 Registration opens, coffee and pastries; 09:30 News and updates – …

Read more »

The post UK Citrix User Group 2018 – Birmingham meeting appeared first on UK Citrix User Group.

The Top Technology Predictions for 2018

Theresa Miller - Thu, 01/04/2018 - 06:30

2017 was a great year in IT, bringing much change in how businesses are consuming IT. For example, cloud for many organizations is no longer a question of if but when. We are also now looking at data analytics, virtual reality, and artificial intelligence in new ways that benefit the business and not just in ways […]

The post The Top Technology Predictions for 2018 appeared first on 24x7ITConnection.

Folder Redirection to OneDrive on Windows 10 with Intune

Aaron Parker's stealthpuppy - Wed, 12/27/2017 - 11:15

If you’re deploying Windows 10 with Modern Management (Azure AD joined, MDM managed), you’ll likely have wondered about data protection – if users aren’t intentionally saving documents to their OneDrive folder, those documents are likely not synchronised and therefore not protected against data loss.

Traditionally managed PCs will have folder redirection (and offline files) so that users’ documents are synchronised when corporate network connectivity is restored. Some organisations will even have implemented folder redirection into the OneDrive folder via Group Policy, as a better alternative.

Implementing folder redirection for Windows 10 via Intune currently isn’t possible, so we need a creative solution to this challenge. With PowerShell scripts available to deploy via Intune, we can create a custom approach for redirecting important folders into OneDrive.

How Folder Redirection Works

Here’s an old but good article that covers how the Folder Redirection Extension works. It was written for Windows XP / Windows Server 2003, but the concepts are still the same in 2017. The article includes the following overview of folder redirection:

Folder Redirection processing contains five steps:

  1. Determine which user folders to redirect based on changes to Group Policy settings at the time of logon.
  2. Determine the target location specified for redirection and confirm the user has access rights to that location.
  3. If the target folder does not exist, the folder is created and the appropriate access control list (ACL) rights are set.
  4. If the folder exists, access rights and folder ownership are checked.
  5. If desired, the files contained within specified folders are moved to the new location, which also deletes them from the source folder if the source folders are local.

In this case, because we’re looking to redirect folders with the source and destination in the user profile on a local disk, we can skip steps 2, 3, and 4. Step 1 is obviously our main requirement, and step 5 – moving existing data into the new folder on the same disk – should be quick and reasonably safe on modern PCs with SSDs.

Given that we don’t have Group Policy available to us, we need to implement steps 1 and 5 in such a way that we can be sure the redirection and move of data will be successful.

Implementing folder redirection in PowerShell 

A script that implements folder redirection using SHSetKnownFolderPath is available from here: SetupFoldersForOneDrive.ps1. This defines a function called Set-KnownFolderPath that can be used to redirect a known folder of your choosing to a target path and it works quite well. In its current iteration though, all it does is redirect the folder. 
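
To illustrate the underlying mechanism, here is a minimal sketch of calling SHSetKnownFolderPath from PowerShell via P/Invoke; the OneDrive target path is only a placeholder, and the linked scripts wrap this call in the reusable Set-KnownFolderPath function:

# Compile a small P/Invoke wrapper for the shell API
$Signature = @'
[DllImport("shell32.dll")]
public static extern int SHSetKnownFolderPath(ref Guid folderId, uint flags, IntPtr token,
    [MarshalAs(UnmanagedType.LPWStr)] string path);
'@
$KnownFolders = Add-Type -MemberDefinition $Signature -Name 'KnownFolders' -Namespace 'Shell32' -PassThru

# FOLDERID_Documents; Desktop, Pictures etc. have their own well-known GUIDs
$Documents = [Guid] 'FDD39AD0-238F-46AF-ADB4-6C85480369C7'
$Target = "$env:UserProfile\OneDrive - Contoso\Documents"   # placeholder target path

$Result = $KnownFolders::SHSetKnownFolderPath([ref] $Documents, 0, [IntPtr]::Zero, $Target)
If ($Result -ne 0) { Write-Error "SHSetKnownFolderPath failed with HRESULT $Result." }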

Because we also need to move the folder contents, I’ve forked the script and added some additional functionality:

This version of the script updates the Set-KnownFolderPath function to ensure all known folders for Documents, Pictures etc. are covered and adds:

  • Get-KnownFolderPath – we need to know what the existing physical path is before redirecting the folder
  • Move-Files – a wrapper for Robocopy.exe (sketched below). Rather than re-implement Robocopy’s functionality in PowerShell, the script calls it directly to move the contents of the folder to the new location. This also gives us a full log of all files moved to the new path.
  • Redirect-Folder – this function wraps some testing around the redirect + move functionality
  • Reads the OneDrive for Business sync folder from the registry to avoid hard-coding the target path
  • Implements redirection for the Desktop, Documents and Pictures folders.
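
The Robocopy wrapper mentioned above might look something like this minimal sketch; the parameter names are illustrative rather than those used in the published script:

Function Move-Files {
    Param (
        [string] $Source,
        [string] $Destination,
        [string] $Log
    )
    # /E copies subfolders (including empty ones), /MOVE removes files from the source after
    # copying, /XJ skips junction points in the profile, /NP and /NJH keep the log readable
    $Arguments = "`"$Source`" `"$Destination`" /E /MOVE /XJ /NP /NJH /LOG:`"$Log`""
    Start-Process -FilePath "$env:SystemRoot\System32\Robocopy.exe" -ArgumentList $Arguments -Wait -WindowStyle Hidden
}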

My script could do with some additional error checking and robustness; however, it provides the functionality required to redirect specific folders into the OneDrive folder and can be re-run as required to ensure that redirection is implemented for each folder.

Deploying with Microsoft Intune

Intune allows you to deploy PowerShell scripts that run either in the user’s context or in the Local System context. 

Intune PowerShell script settings – user context. Not what we want.

Implementing the redirection script in the user context, though, fails when adding the SHSetKnownFolderPath class to the script session. Additionally, deploying the script in this manner will only run the script once – if the OneDrive client is not configured correctly when the script runs, folder redirection will never work.

Instead of deploying the folder redirection script with Intune, we can instead deploy a script that downloads the folder redirection script to a local path and creates a scheduled task that runs at user login to run the script. That way, we can be sure that the redirection script will run after the user has logged into the OneDrive client and set up the local sync folder in their profile. Additionally, this approach will enable folder redirection to run for any user logging onto the PC.

The script below will download the redirection script to C:\ProgramData\Scripts, create the scheduled task and output a transcript into the same folder.
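
The wrapper is embedded as a gist in the original post; a minimal sketch of the idea, using a placeholder download URL and an illustrative task name, looks something like this:

# Placeholder URL: point this at a trusted location hosting the redirection script
$Url        = 'https://example.com/Redirect-Folders.ps1'
$ScriptDir  = "$env:ProgramData\Scripts"
$ScriptPath = Join-Path -Path $ScriptDir -ChildPath 'Redirect-Folders.ps1'

New-Item -Path $ScriptDir -ItemType Directory -Force | Out-Null
Start-Transcript -Path (Join-Path -Path $ScriptDir -ChildPath 'Set-RedirectOneDriveTask.log')
Invoke-WebRequest -Uri $Url -OutFile $ScriptPath -UseBasicParsing

# Run the downloaded script at every user logon, hidden, in the logged-on user's context
$Action    = New-ScheduledTaskAction -Execute 'powershell.exe' -Argument "-NoProfile -ExecutionPolicy Bypass -WindowStyle Hidden -File `"$ScriptPath`""
$Trigger   = New-ScheduledTaskTrigger -AtLogOn
$Principal = New-ScheduledTaskPrincipal -GroupId 'BUILTIN\Users'
Register-ScheduledTask -TaskName 'Redirect folders to OneDrive' -Action $Action -Trigger $Trigger -Principal $Principal -Force | Out-Null
Stop-Transcript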

Note that this downloads the redirection script from my public gist repository. If you implement this in production, I would highly recommend a more secure source for the redirection script.

Right now this script is quite simple – it will need to be updated to remove or update an existing script in the event you need to remove the script from Intune and re-add it.

To deploy the script via Intune, save it locally as Set-RedirectOneDriveTask.ps1 and add as a new PowerShell script under Device Configuration. Ensure that the script runs as Local System by setting ‘Run this script using the logged on credentials’ to No. This is required for creating the scheduled task. 

Adding the Create OneDrive Redirect Task script to Intune

Assign the script to a user or device group and track deployment progress in the Overview blade. A successful deployment will result in a scheduled task on the target PCs. 

OneDrive Folder Redirection Task Properties

When the script is downloaded and the task is created successfully, you’ll see the script and a transcript in C:\ProgramData\Scripts.

The downloaded folder redirection script

When the folder redirection script runs Robocopy to move documents, it will log those moves to %LocalAppData%\RedirectLogs.

Data copy/move logs

When implemented in this way, the script will run at user login and successfully implement folder redirection into the OneDrive for Business sync folder. The user will see a PowerShell script window (even though it’s set to hidden) – this could be fixed by pointing the scheduled task to a VBscript wrapper.

Configuring OneDrive

OneDrive should be configured for single sign-on for the best user experience. It is not necessarily a requirement; however, it will make it quicker for users to be up and running and therefore quicker for the script to redirect the target folders.

Given the approach outlined in this article, it’s unlikely that the user’s folders will be redirected on the first login. Adding a delay to the scheduled task may allow redirection to work on the first run; however, this would require several tasks to run in order and Intune won’t necessarily run all tasks in the required order.

Summary

In this article, I’ve outlined an approach to implementing folder redirection with PowerShell, via Intune, into the OneDrive for Business sync folder. This uses a script deployed from Intune to Windows 10 Azure AD joined machines to download the folder redirection script and create a scheduled task that runs at user login to perform the redirection and data move.

Redirecting the Desktop, Documents and Pictures folders should provide protection for the key user folders. While redirecting additional folders is possible, they can often contain data that would be less than ideal for synchronising to OneDrive.

Redirected Documents folder in the OneDrive sync folder

The scripts I’ve posted here are provided as-is and I highly recommend testing carefully before implementing in production.

Bonus 

The folder redirection script will work for any enterprise file and sync tool, not just OneDrive for Business. For example, if you wanted to redirect folders into Citrix ShareFile, just read the PersonalFolderRootLocation value from HKCU\Software\Citrix\ShareFile\Sync to find the sync folder.
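
For example, reading that value in PowerShell (the registry path and value name are as quoted above):

# Returns the ShareFile sync root, which can then be used as the redirection target
$Key = 'HKCU:\Software\Citrix\ShareFile\Sync'
(Get-ItemProperty -Path $Key -ErrorAction SilentlyContinue).PersonalFolderRootLocation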

This article by Aaron Parker, Folder Redirection to OneDrive on Windows 10 with Intune appeared first on Aaron Parker.

Categories: Community, Virtualisation

Deploy Citrix Receiver to Windows 10 with Intune and PowerShell

Aaron Parker's stealthpuppy - Sat, 12/23/2017 - 01:47

If you’ve embraced Windows 10 Modern Management, you’ll know that some applications present a challenge for deployment via Intune (or any MDM solution), because Windows 10 MDM supports the deployment of Win32 applications via a single MSI only. Applications such as Citrix Receiver, which ship as a single EXE (wrapping multiple MSI files), can be particularly challenging. You can create a custom wrapper to deploy Receiver, but this requires a packaging tool and some specific knowledge of how to package applications.

Microsoft Intune now supports deploying PowerShell scripts to Windows 10 machines, which can provide a more flexible framework for deploying complex applications. For Citrix Receiver, we can use this approach to have targeted Windows 10 PCs download the latest version of Receiver directly from Citrix and install it with any required command-line options. This ensures that devices always install the latest version and the Intune administrator only ever has to create a single deployment option via a PowerShell script.

Installing Citrix Receiver

Here’s a simple script to detect whether Receiver is installed and if not, download and install Receiver using a specific set of command line options.
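
The script is embedded as a gist in the original post; a minimal sketch of the approach follows, where the download URL and the install switches are assumptions to verify against Citrix's documentation rather than values taken from the original:

# Assumed public download location for the latest Receiver for Windows
$Url       = 'https://downloadplugins.citrix.com/Windows/CitrixReceiver.exe'
$Installer = Join-Path -Path $env:Temp -ChildPath 'CitrixReceiver.exe'

# Detect an existing install via the uninstall registry keys (32- and 64-bit locations)
$UninstallKeys = 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*',
                 'HKLM:\SOFTWARE\WOW6432Node\Microsoft\Windows\CurrentVersion\Uninstall\*'
$Installed = Get-ItemProperty -Path $UninstallKeys -ErrorAction SilentlyContinue |
    Where-Object { $_.DisplayName -like 'Citrix Receiver*' }

If (-Not $Installed) {
    Invoke-WebRequest -Uri $Url -OutFile $Installer -UseBasicParsing
    # Assumed switches: /silent suppresses the UI, /includeSSON enables single sign-on
    Start-Process -FilePath $Installer -ArgumentList '/silent /includeSSON' -Wait
}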

The script could be extended with some additional error checking and logging to provide some additional auditing of the installation, but I have tested this successfully.

Deploying via Intune

Deploying the script via Intune is done just like any other PowerShell script. Save the script locally and then in the Azure Portal, Intune blade, under Device Configuration / PowerShell scripts, add a new script and upload the saved script.

Adding the Install-CitrixReceiver.ps1 script to Intune

Assign the script to an Azure AD group for target users or devices. Your script should then be listed as an assigned script. 

Install-CitrixReceiver.ps1 alongside other PowerShell scripts

Once deployed, we can track successful installations in the Overview blade. Note that the script will only run once per target device – it should be unlikely that the device will receive the script and have it fail (e.g. fail to download the CitrixReceiver.exe), but there could be edge cases where installation fails as a result of some very specific circumstances.

Citrix Receiver deployment overview

Post-deployment, we can rely on the updater functionality built into the latest Receiver releases to keep end-points up to date.

Summary

We used a simple approach to deploy a non-MSI application to Windows 10 via Intune with a PowerShell script. This example enables deployment of Citrix Receiver with no special packaging, and because the end-point downloads Receiver directly from Citrix, we can be sure that the latest version will be deployed each time.

This article by Aaron Parker, Deploy Citrix Receiver to Windows 10 with Intune and PowerShell appeared first on Aaron Parker.

Categories: Community, Virtualisation

Browser Ad Blockers and Privacy

Helge Klein - Mon, 12/18/2017 - 16:40

You have probably been in this situation: on some shopping site you put an article in your cart, but decide not to buy it after all. Later on, you notice that you are getting targeted ads for the exact same product on totally unrelated sites – or so you think. There is, however, a common denominator: the ad network. It tracks you quite effectively as you move from site to site. Many people are not exactly happy about that and turn to ad blockers to guard their privacy. This article looks at one way to measure the ad blockers’ effectiveness in terms of keeping their users’ privacy.

How to Measure Privacy

The method I used to measure how well the user’s privacy is protected is quite simple. It is based on the principle that data leakage should be avoided at the source. Ad networks cannot track you if they do not have any information about you.

With the help of our user experience and application performance monitoring product uberAgent I determined the number of different internet hosts a website communicates with. Each of those hosts has the ability to track the user – thus, the smaller the number, the better.

An Example

Opening the homepage of the New York Times in a browser without ad blockers just once resulted in communications with 84 different hosts! The list includes various ad networks and marketing analytics companies (Bizo, BlueKai, Bombora, Chartbeat, DataXu, DoubleClick, eXelate, Eyeota, Ixi, Keywee, Krux, MarkMonitor, Media6Degrees, Neustar, RapLeaf, RU4, Undertone, VisualDNA), Facebook, Google, LinkedIn, Pinterest, Twitter, and Yahoo. All of those companies can see that you accessed nytimes.com. In other words:

With a single visit to the homepage of the New York Times you become trackable by 18 advertisement/marketing companies and 6 social networks.

The full list of sites nytimes.com communicates with is quite long. Click the thumbnail on the right to see the full-size version.

Measurement Procedure

For a somewhat robust measurement of the effectiveness of ad blockers with regards to privacy, I opened the world’s top 30 sites, determined the number of hosts they communicated with and compared the numbers for configurations with and without ad blockers.

Sites I Tested With

I selected the 30 sites by Alexa ranking, ignoring multiple country-specific domains per organization. Following is a full list of sites:

“Western sites” (mostly US companies):

  • https://www.google.de
  • https://www.youtube.com
  • https://www.facebook.com
  • https://www.wikipedia.org
  • https://de.yahoo.com
  • https://www.reddit.com
  • https://www.amazon.com
  • https://twitter.com
  • https://outlook.live.com
  • https://www.instagram.com
  • https://linkedin.com
  • https://netflix.com
  • https://ebay.com
  • https://de.pornhub.com
  • https://www.bing.com
  • https://www.twitch.tv
  • https://www.xvideos.com
  • https://www.microsoft.com

“Non-western” sites:

  • https://www.baidu.com
  • http://www.qq.com
  • https://world.taobao.com
  • https://www.tmall.com
  • https://vk.com
  • https://www.sohu.com
  • http://www.sina.com.cn
  • https://global.jd.com
  • https://weibo.com
  • https://360.cn
  • https://yandex.ru
  • https://aliexpress.com

Test Procedure

The test procedure went as follows:

  1. Deleted the entire browser history (including cookies and the cache)
  2. Opened the top 30 websites from bookmarks at the same time
  3. Cycled through all tabs
  4. Waited for 2 minutes
  5. Counted the number of sites contacted
  6. Repeated the above procedure three times per test configuration

Test Configurations

I performed the tests with the following configurations:

  1. Firefox 57 (default configuration)
  2. Firefox 57 + uBlock Origin
  3. Firefox 57 + Adblock Plus

Adblock Plus is more widely installed, whereas uBlock Origin has a reputation for being fast and efficient. I used Firefox for these tests, but I expect the results to be very similar with other browsers.

Results

#Hosts Communicated With – All Sites

The chart below shows the total number of distinct hosts the world’s 30 top sites communicated with when each site’s homepage was accessed just once:

#Hosts Communicated With – “Western” Sites

The chart below shows the total number of distinct hosts the top 18 “western” sites communicated with when each site’s homepage was accessed just once:

Conclusion

As you can see in the charts above, ad blockers drastically reduce the number of hosts a website communicates with, improving privacy significantly. uBlock Origin lives up to its reputation of being more effective than Adblock Plus. Both ad blockers seem to be better optimized for “western” sites.

Comparing the top 30 sites to the New York Times, a typical media site, one cannot fail to notice that media sites are apparently much worse offenders in terms of tracking and analytics than other types of sites.

The post Browser Ad Blockers and Privacy appeared first on Helge Klein.

Introduction to the CISSP Certification

Theresa Miller - Thu, 12/14/2017 - 06:30

Over the last decade we’ve seen an increased awareness of information security in organizations. This has been accelerated by stories about everything from viruses to hackers hitting the news. This, of course, does not account for those stories which do not make it to the front page. If you aren’t familiar with information security, […]

The post Introduction to the CISSP Certification appeared first on 24x7ITConnection.

Keep Your Sanity: 5 Ways to Slow Down

Theresa Miller - Tue, 12/05/2017 - 22:37

We are going through yet another time of intense change in the IT industry. Since it is the end of the year, the pundits will start pontificating their view of how the world will look once we get through this transition period. The data center is dead! If you’re not on the public cloud your […]

The post Keep Your Sanity: 5 Ways to Slow Down appeared first on 24x7ITConnection.

Browser Video: Codecs, Formats & Hardware Acceleration

Helge Klein - Mon, 12/04/2017 - 14:13

Web video is ubiquitous. We take it for granted that browsers play video in high resolution, over any connection, on any device. Behind the scenes, a complex machinery of video formats, codecs, and GPU acceleration techniques is at work to make it all happen. This post explains what is what.

Video Container Formats

Video containers store audio and video tracks, which may be encoded in a variety of codecs. Two video container formats are being used on the web today: MP4 and WebM.

MP4

MP4 contains audio encoded in AAC or MP3 and video encoded in H.264 (AVC) or H.265 (HEVC). The MP4 format is not royalty-free.

MP4 containers with H.264 video are supported in basically any browser:

MP4 containers with H.265 video (HEVC), on the other hand, are only supported in Edge and Internet Explorer, and only on devices that can decode H.265 in hardware. The reason is simple: Edge and IE do not decode H.265 by themselves, so Microsoft does not have to pay licensing fees. Edge/IE pass the video stream for decoding to the OS which in turn sends it to the GPU. Licensing costs are paid by the GPU vendor.

WebM

WebM contains audio encoded in Vorbis or Opus and video encoded in VP8 or VP9. The WebM format is royalty-free.

WebM containers are supported in Chrome and newer versions of Edge and Firefox only. Internet Explorer does not support WebM.

Hardware-Accelerated Video Decoding

GPUs have dedicated units that are much more efficient at decoding video than general-purpose CPUs which lack similar capabilities. However, for hardware decoding to work, all the components involved need to work hand-in-hand:

  • The GPU’s decoder needs to support the video properties (codec, 4K resolution, HDR)
  • The graphics driver needs to make the GPU’s capabilities available to the OS
  • The operating system needs an abstraction layer so that applications do not have to deal with individual vendor driver APIs
  • The application (the browser) needs to make use of the OS’ decoding API

DirectX Video Acceleration API 2.0

The Windows API of choice for accessing the machine’s hardware video decoding units is called DirectX Video Acceleration 2.0. DXVA is a little-known API; not many tools exist for exploring it. One of the few is DXVA Checker. It shows the video capabilities reported by the graphics drivers.

The screenshot below depicts the capabilities of the integrated Intel HD Graphics 620, built into the Kaby Lake i7-7500U CPU. Video decoding is supported for H.264, H.265 (HEVC) and VP9 (all at least up to 4K).

H.264 Hardware Acceleration

Hardware-accelerated decoding of H.264 (AVC) video is available on all CPUs/GPUs sold in the past 4+ years. 4K resolution, however, is only supported by newer hardware sold in the past 2+ years, approximately.

H.265 Hardware Acceleration

Hardware-accelerated decoding of H.265 (HEVC) video is available only on newer CPUs/GPUs. More specifically, at least the following are required:

  • Intel Skylake
  • Nvidia Maxwell (later models) or Pascal
  • AMD Fiji or Polaris

VP9 Hardware Acceleration

Hardware-accelerated decoding of VP9 video is available only on the newest generations of CPUs/GPUs. At least:

  • Intel Kaby Lake
  • Nvidia Maxwell (later models) or Pascal
  • AMD Polaris

Checking the Browser’s Acceleration Status

Chrome reveals which video codecs are hardware-accelerated on the current platform at the special URL chrome://gpu.

The screenshot above was taken on a device with the Intel Kaby Lake CPU i7-7500U. Hardware decode support for VP9 is available (shown as VPx).

Firefox provides some information at about:support, but does not show which codecs are accelerated. There is no known way to determine the acceleration status for Edge or IE.

Codecs Used by Video Sites

Almost all video websites use the H.264 codec, and for a very simple reason: H.264 is the most broadly supported codec in browsers and on mobile devices.

There is just one exception: YouTube, when accessed from Chrome or Firefox, serves VP9 encoded video.

That brings us to an important caveat: YouTube videos in Chrome or Firefox are inefficiently decoded in software unless the playback device is equipped with one of the newer GPU models.

The post Browser Video: Codecs, Formats & Hardware Acceleration appeared first on Helge Klein.

Improving Ivanti Application Control Message Boxes

Aaron Parker's stealthpuppy - Sat, 12/02/2017 - 12:37

Ivanti Application Control (previously AppSense Application Manager) is an application whitelisting and privilege management solution; however, I think you’re likely aware of that since you’re reading this article. Application Control has a number of customisable message boxes that are displayed to the end-user for Windows application whitelisting or privilege elevation scenarios. In this article, I’ll discuss improving the end-user experience with some visual flair and text.

Default Message Boxes

Let’s take a look at a typical message box. Below is the default Access Denied message displayed to users on Windows 10 when attempting to start an application that hasn’t been white-listed.

Ivanti Application Control default access denied message box

With apologies to Guy Leech (the original developer of AppSense Application Manager), this message box doesn’t fit with Microsoft’s recommended Windows 7 or Windows 10 desktop UI guidelines, nor does it display anything to the end user that is useful or actionable. Side note – on Windows 10, I’d love to see this particular message as a notification instead, because there’s no immediate action the user can take.

Here’s another message box – this one is shown for privilege escalation. Similar in a sense to a UAC dialogue box, but this forces the user to complete the action for elevating an application with a reason for taking that action that can be audited.

Ivanti Application Control default self-elevation message box

There are several scenarios where Application Control may display a message to the end user:

  • Access Denied – execution of an application is denied
  • Application Limits Exceeded – the end-user is prevented from running multiple instances of an application
  • Self-Elevation – an end-user can elevate an application via Application Control instead of being granted administrative rights
  • System Controls – the user can be prevented from uninstalling an application, clearing specific event logs or stopping services
  • Time Limits – time limits can be put on application execution
  • Self-Authorization – end-user can be given the ability to whitelist an application themselves
  • Network Connections – controls can be placed on network destinations, paths or ports

So there is potentially a reasonable level of interaction with the end-user, and thus Application Control can have some impact on the perception of a user’s everyday experience. Fortunately, each of these message boxes is almost fully customisable – Ivanti provides the administrator with the ability to change both the appearance and the text of each message to suit a specific requirement or the environment into which it is deployed.

Creating “Good” Message Boxes

Dialog boxes suck (or at least a good chunk of them do). To understand why, here’s an excellent article I recommend reading – The Magic of Flow and Why Dialogs Frustrate People. Dialogs interrupt the user’s workflow, and it’s safe to assume a user typically sees multiple messages in a single session (not just our Application Control messages).

Application Control supports customising the messages as well as the UI with HTML and CSS. With customisable notifications, the Application Control administrator effectively becomes a UX designer; therefore, to provide users with the best experience possible and balance the security needs of the organisation, we should carefully consider that experience, both visually and in the narrative of the text displayed to the user.

When customising these, I recommend paying careful attention to the language and tone of the text. Empowering a user to take the right action – or none at all – without generating unnecessary service desk calls is important. Here are my 3 recommendations for customising these message boxes for an environment:

  • Ensure the message boxes fit with Microsoft UX guidelines for Windows – apart from not visually assaulting the senses, fitting in with the standard Windows visual style will provide users with a sense that these messages are a part of the normal Windows desktop workflow
  • Don’t overwhelm the user with explanatory text that they aren’t going to read anyway – avoid dialogue box fatigue. If you can, provide a link to more information, so that the user can choose to read up on why the system has been implemented
  • Don’t assume the user is doing the wrong thing. Taking a default hostile stance via the language or wording used in the messages won’t foster a sense of trust. Yes, insider threats are often the main cause of security breaches, but IT can do its part in building team trust

I believe these to be reasonable principles to consider, but of course, some environments may have specific requirements.

Microsoft has published user interface guidelines for Windows for many years, with what I would call “mixed results” from the developer community. While good design isn’t easy, Microsoft has guidelines on Fonts, Style and Tone, and User Interface Principles that are applicable to the Application Control administrator.

Looking for Inspiration

Microsoft has specific message boxes in User Account Control that I’ve used as the basis for improving the message boxes from Application Control, both visually and in the language/text. Here’s a typical UAC message box on Windows 10 – it provides some immediate visual feedback with colour and simple language for the user to act upon:

Windows User Account Control message box

UAC (and SmartScreen) displays various message boxes, with different colours depending on the action taken, to provide the user with immediate visual feedback.

From top to bottom: blocked app, app with unknown publisher, app with a known/trusted publisher

Sticking with an established visual style, we can use these colours in our Application Control message boxes. I haven’t found documentation on the colours from Microsoft, so the hex values below might not be 100% accurate.

  • Blue (#85b8e8) background is from the message box used to identify Windows components or software that is signed and trusted
  • Yellow (#f8d470) background is from the message box that identifies components or applications that are untrusted or not signed
  • Red (#8e000b) background denotes an application that has been blocked by Windows SmartScreen; I’ve used a softer red (#bf3235) background from the Ivanti Application Control console instead of UAC’s red

In addition to the visual style, we can use these as examples of the language to use in our customised Application Control message boxes. 

Updating Ivanti Application Control Message Boxes

These message boxes are customisable via HTML and CSS, so we have the ability to exert a certain level of control on the look and feel. To enable the full level of customisation, you’ll need to be running Application Control 10.1 FR3, as the limit on the number of characters in some of the messages has been removed.

Here are the default Message Settings properties:

Ivanti Application Control message settings

Under that advanced button, is the CSS used to customise the visuals. So the first thing we’re going to do is customise that CSS to align the visuals with Windows 10. I am maintaining an updated CSS file to completely replace the default CSS on GitHub, which means that anyone can fork the file, improve it and contribute.

There are a few things that the CSS does and provides customisation for:

  1. Changes the default font to Segoe UI, the default Windows font (instead of Microsoft San Serif). The font used in the user input box in self-elevation message boxes is changed to Consolas instead of Courier New
  2. Hides the red and white X graphic. By default, this image is shown on all message boxes and doesn’t actually fit in with the intention of all message boxes
  3. Enables a header in the 3 colours shown above
  4. Gives buttons a Windows 10 look
  5. Prevents scrollbars from showing inside the message boxes – because the messages can only be set to a fixed height and width, some scrolling occurs even in the default messages shown in the images at the beginning of this article

At the moment, this CSS isn’t perfect and requires updates to fix text being cut off on the right-hand side of the dialog box, but I think it’s a huge improvement over what’s provided by default.

Access Denied

Let’s look again at the default Access Denied message box. This doesn’t fit into the Windows UI, doesn’t necessarily tell the user what’s occurred, and doesn’t tell them whether any further action is required.

Ivanti Application Control default access denied dialog box

With our new CSS in place, we can modify the HTML behind this message to reflect what’s going on, as well as provide the user with a link to a page with more information. Note that because my CSS isn’t currently perfect, I’m cheating a bit by putting a carriage return after “Running this app might put”, so that the text isn’t cut off on the right-hand side of the message box.

<div class="header red">An app has been prevented from running to protect this PC.</div>
<div class="description">An unrecognised or unauthorised app was prevented from starting. Running this app might put
your PC at risk.
Blocked app: %ExecutableName%
Location: %DirectoryName%
Description: %AC_FileDescription%
Publisher: %AC_CompanyName%
Please view the <a href="https://servicedesk.stealthpuppy.com">Information Security Corner</a> for details on why this app was blocked.
To install an app, you may need to raise a service request.
</div>

Because we have a fixed height and width for the box, I’ve set the height to 690 pixels and the width to 440. Our new Access Denied message box now looks like this:

Ivanti Application Control access denied message box with improved styling

In this example, we are now providing the user with some immediate visual feedback, some reason as to why the application was blocked, some details on what was blocked and finally a link to more information (i.e. the action that the user can take). An external page can provide the user with a framework for understanding what’s going on and whether they should pick up the phone for the service desk (or not), with better detail and interaction than a message box could provide.

Self-Elevation

Now let’s do the same with the Self-Elevation action. Here’s the HTML:

<div class="header yellow">Do you want to allow this app to make changes to your device?</div>
<div class="description">App name: %ExecutableName%
<br/>This action will run this app with elevated privileges. Please provide the reason for taking this action. This information will be logged and audited. Improper use of elevated applications is in violation of the <a href="https://servicedesk.stealthpuppy.com">Acceptable Use Policy</a>.</div>

I’ve set the height to 770 pixels and the width to 460. Here’s the result:

Ivanti Application Control self-elevation message box with improved styling

In this example, we aren’t bombarding the end-user with text nor assuming what they’re doing is a hostile action. If you’re an IT Pro or a developer, there’s a good chance you’ll need to elevate an application several times during a single session, so this could be something you see multiple times a day.

System Controls

For a simple example, let’s update the System Controls message.

<div class="header blue">Uninstall of %ApplicationName% is not permitted.</div>
<div class="description">Removal of this application has been denied to protect the integrity of this PC.</div>

Which then looks like this:

Ivanti Application Control system controls message box with improved styling

Here we’ve used blue to differentiate this from the previous two messages.

Be aware of High DPI Displays

Note that today Application Control doesn’t support high DPI displays or scaling above 100% very well. Because those dialog boxes are a fixed size and the contents don’t scale, you get something like this:

Ivanti Application Control Access Denied Dialog at 200% scaling

Ivanti is, of course, aware of the issue and I assume there’ll be a fix in a future update. Until then, at least on Windows 10, you can override the high DPI scaling behaviour. The Application Control Agent folder has a number of executables that run each of the messages. For example, to fix the scaling of the Access Denied message box, set the compatibility properties of AMMessage.exe so that the high DPI scaling behaviour override is set to System (Enhanced).

Setting Application Control High DPI Scaling Compatibility

Once set, the message box will be set to its correct size and scaled up on high DPI displays, thus the box may look fuzzy depending on resolution and scaling. To avoid setting this on each executable individually on each end-point, use Group Policy or the Application Compatibility Toolkit to set these properties.
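
For example, the same override can be set through the application compatibility layers registry key; the AMMessage.exe path below is a guess at a typical install location, and the flag string should be verified on a test machine:

# '~ GDIDPISCALING DPIUNAWARE' corresponds to the 'System (Enhanced)' high DPI override
$Layers = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\AppCompatFlags\Layers'
$Exe    = 'C:\Program Files\AppSense\Application Manager\Agent\AMMessage.exe'   # assumed path
If (-Not (Test-Path -Path $Layers)) { New-Item -Path $Layers -Force | Out-Null }
Set-ItemProperty -Path $Layers -Name $Exe -Value '~ GDIDPISCALING DPIUNAWARE'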

Conclusion

In this article, I’ve discussed how to improve the Ivanti Application Control message boxes for both visuals and text. With some effort, we’ve updated the style to better fit in with Windows 10, but these look right at home on Windows 7 as well. Additionally, the text has been improved to provide users with (hopefully) just the right amount of explanation, enabling them to take effective action if needed.

The custom CSS streamlines the visuals and better aligns the message boxes with UI guidelines from Microsoft. While I’ve made the CSS available on GitHub, it could do with some improvement. Opening this up to the community will enable feedback and updates.

This article by Aaron Parker, Improving Ivanti Application Control Message Boxes appeared first on Aaron Parker.

Categories: Community, Virtualisation

Your Guide to AWS re:Invent Announcements for 2017

Theresa Miller - Thu, 11/30/2017 - 06:30

AWS re:Invent 2017 is one of the most anticipated conferences of the year, since it usually falls towards the end of November. One thing that we all can count on is a slew of announcements during re:Invent 2017.  Luckily for us, AWS makes it easy to keep track of these re:Invent announcements. Let’s take a […]

The post Your Guide to AWS re:Invent Announcements for 2017 appeared first on 24x7ITConnection.

Microsoft Tech Summit – Sydney wrap-up

Theresa Miller - Tue, 11/21/2017 - 06:30

Microsoft kicked off their free, global Tech Summit event in Sydney on 16 & 17 November. This is their first attempt at this event format which is travelling to 13 countries around the world, with a US series also planned. So, what was different and did it work? Sessions, but not as you know them […]

The post Microsoft Tech Summit – Sydney wrap-up appeared first on 24x7ITConnection.
