Citrix

Who Said On-Premises Email Was Dead, Look Out Exchange Server 2019 is Here!

Theresa Miller - Tue, 09/18/2018 - 11:23

Well, if you haven’t heard, Exchange Server 2019 is now in public preview. During Microsoft Ignite 2017 it was announced that Exchange Server 2019 would be coming out in 2018. This announcement put to rest fears that Exchange Server 2016 would be the last on-premises version. Microsoft came through and released the public preview of Exchange […]

The post Who Said On-Premises Email Was Dead, Look Out Exchange Server 2019 is Here! appeared first on 24x7ITConnection.

New - Latest EPA Libraries

Netscaler Gateway downloads - Fri, 09/14/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Multi Cloud-Are we all talking about the same Multi Cloud?

Theresa Miller - Thu, 09/13/2018 - 05:30

The latest buzzword of the day is multi cloud and its use in the enterprise. There is lots of confusion and speculation, but what does multi cloud really mean? Are we all talking about the same thing when we say multi cloud? Because there are different types of cloud service offerings, the meaning of multi cloud can […]

The post Multi Cloud-Are we all talking about the same Multi Cloud? appeared first on 24x7ITConnection.

Your VMworld US 2018 Recap, Announcements and Sessions

Theresa Miller - Tue, 09/11/2018 - 05:30

VMware took the stage once again in Las Vegas in August 2018 as another VMworld came and went, loaded with announcements and content.  Lots of updates were shared for existing products as well as new products, and there was even a brand new acquisition.  Not only was there lots of technical content and update […]

The post Your VMworld US 2018 Recap, Announcements and Sessions appeared first on 24x7ITConnection.

New - NetScaler Gateway (Feature Phase) 12.1 Build 49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 21:00
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - Citrix Gateway (Feature Phase) 12.1 Build 49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - Components for NetScaler Gateway 12.1

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

New - NetScaler Gateway (Feature Phase) Plug-ins and Clients for Build 12.1-49.23

Netscaler Gateway downloads - Thu, 08/30/2018 - 18:30
New downloads are available for Citrix Gateway
Categories: Citrix, Commercial, Downloads

Review of Additive Manufacture and Generative Design for PLM/Design at Develop 3D Live 2018

Rachel Berrys Virtually Visual blog - Wed, 05/16/2018 - 13:54

A couple of months ago, back at D3DLive!, I had the pleasure of chairing the Additive Manufacturing (AM) track. This event, in my opinion, is alongside a few others (e.g. Siggraph and COFES) one of the key technology and futures events for the CAD/graphics ecosystem. It is also free, thanks in part to major sponsors HP, Intel, AMD and Dell.

A few years ago, at such events the 3D-printing offerings were interesting and quirky, but not really mainstream manufacturing or CAD. There were 3D-printing vendors and a few niche consultancies, but it certainly wasn’t technology making keynotes or being mentioned by the CAD/design software giants. This year the second session of the day on the keynote stage (video here) featured a generative design demo from Bradley Rothenberg of nTopology.

With a full track dedicated to Additive Manufacture (AM) this year, including the large mainstream CAD software vendors such as Dassault, Siemens PLM and Autodesk, this technology really has hit the mainstream. The track was well attended; when polled, approximately half of the attendees were actually involved in implementing additive manufacture, and a significant proportion were using it in production.

There was in general significant overlap between many of the sessions; this technology has now become so mainstream that, rather than new concepts, we are seeing (as with mainstream CAD) more of an emphasis on specific product implementations and GUIs.

The morning session was kicked off by Sophie Jones, General Manager of Added Scientific, a specialist consultancy with strong academic research links that investigates future technologies. This really was futures stuff rather than the mainstream, covering 3D-printing of tailored pharmaceuticals and healthcare electronics.

Kieron Salter from KWSP then talked about some of their customer case studies; as a specialist consultancy they’ve been needed by some customers to bridge the gaps in understanding. Their work in the motorsports sector stood out as cutting-edge, novel automotive design.

Jesse Blankenship from Frustum gave a nice overview of their products and their integration into Solid Edge, Siemens NX and Onshape but he also showed the developer tools and GUIs that other CAD vendors and third-parties can use to integrate generative design technologies. In the world of CAD components, Frustum look well-placed to become a key component vendor.

Andy Roberts from Desktop Metal gave a rather beautiful demonstration walking through the generative design of a part, literally watching the iteration from a few constraints to an optimised part. This highlighted how different many of these parts can be compared to traditional techniques.

The afternoon’s schedule started with a bonus session that hadn’t made the printed schedule from Johannes Mann of Volume Graphics. It was a very insightful overview of the challenges in fidelity checking additive manufacturing and simulations on such parts (including some from Airbus).

Bradley Rothenberg of nTopology reappeared to elaborate on his keynote demo and covered some of the issues for quality control and simulation for generative design that CAM/CAE have solved for conventional manufacturing techniques.

Autodesk’s Andy Harris’ talk focused on how AM was enabling new genres of parts that simply aren’t feasible via other techniques. The complexity and quality of some of the resulting parts were impressive and often incredibly beautiful.

Dassault’s session was given by a last-minute substitute speaker, David Reid; I haven’t seen David talk before and he’s a great speaker. It was great to see a session led from the Simulia side of Dassault, showing how their AM technology integrates with their wider products. A case study on Airbus’ choice and usage of Simulia was particularly interesting, as it covered how even the most safety-critical, traditional big manufacturers are taking AM seriously and successfully integrating it into their complex PLM and regulatory frameworks.

The final session of the day was probably my personal favourite: Louise Geekie from Croft AM gave a brilliant talk on metal AM, but what made it for me was her theme of understanding when you shouldn’t use AM and its limitations – basically, just because you can… should you? This covered long-term considerations on production volumes, compromises on material yield for surface quality, failure rates and the costs of post-production finishing. Just because a part has been designed by engineering optimisation doesn’t mean an end user finds it aesthetically appealing – for example, the case of a motorcycle manufacturer whose customers want the front fork to “look” solid.

Overall my key takeaways were:

• Just because you can doesn’t mean you should: choosing AM requires an understanding of the limitations and compromises, and an overall plan if volume manufacture is an issue.

• The big CAD players are involved, but there’s still work to be done to harden the surrounding frameworks, in particular reliable simulation, search and fidelity testing.

• How well the surrounding products and technologies handle the types of topologies and geometries generative design throws out will be interesting. In particular it’ll be interesting to watch how Siemens Synchronous Technology and other direct modellers cope, and the part search engines such as Siemens Geolus too.

• Generative manufacture is computationally heavy, so the quality of your CPU and GPU is worth thinking about.

Hardware OEMs and CPU/GPU vendors taking CAD/PLM seriously

These new technologies are all hardware- and computationally demanding compared to the modelling kernels of 20 years ago. AMD were showcasing and talking about all the pro-viz, rendering and cloud graphics technologies you’d expect, but it was pleasing to see their product and solution teams, and those from Dell, Intel, HP etc., talking about computationally intensive technologies that benefit from GPU and CPU horsepower, such as CAE/FEA and of course generative design. The increasing involvement and support from hardware OEMs and GPU vendors for end-user and ISV CAD/Design events and forums such as COFES, the Siemens PLM Community and Dassault’s Community of Experts has been noticeable in recent years, which should hopefully bode well for future platform developments in hardware for CAD/Design.

Afterthoughts

A few weeks ago Al Dean from Develop3D wrote an article (bordering on a rant) about how poorly positioned a lot of the information around generative design (topology optimisation) and its link to additive manufacture is. I think many reading it simply thought – yes!

After reading it, I came to the conclusion that many think generative design and additive manufacture are inextricably linked. Whilst they can be used in conjunction, there are vast numbers of use cases where the use of only one of the technologies is appropriate.

Generative design, in my mind, is computationally optimising a design against some physical constraints – it could be mass of material, or physical forces (stress/strain) – and it could include additional constraints: must have a connector like this in this area, must be this long, or even must be tapered and constructed so it can be moulded (with appropriate draft angles so it falls out of the mould).
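As a purely illustrative sketch (not any vendor’s actual solver), that idea can be expressed as a tiny constrained optimisation: minimise mass subject to a stress limit and a fixed length. All of the numbers, the rectangular-section dimensions and the simple cantilever stress formula below are assumptions chosen just to make the example run.

from scipy.optimize import minimize

DENSITY = 2700.0    # kg/m^3, assumed aluminium
LOAD = 1000.0       # N applied at the free end (assumed)
MAX_STRESS = 250e6  # Pa allowable stress (assumed)
LENGTH = 0.5        # m, a fixed "must be this long" requirement

def mass(x):
    width, height = x
    return DENSITY * width * height * LENGTH

def stress_margin(x):
    # Root bending stress of a rectangular cantilever: 6*F*L / (b*h^2); must stay >= 0
    width, height = x
    return MAX_STRESS - 6.0 * LOAD * LENGTH / (width * height ** 2)

result = minimize(
    mass,
    x0=[0.05, 0.05],                      # initial guess: 50 mm x 50 mm section
    bounds=[(0.005, 0.2), (0.005, 0.2)],  # manufacturable size limits
    constraints=[{"type": "ineq", "fun": stress_margin}],
)
print("optimised section (m):", result.x, "mass (kg):", mass(result.x))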

Additive manufacture is essentially 3D printing, often of metals: adding material, rather than the traditional machining mentality of CAD (Booleans often described as target and tool) – removing material from a block of metal by machining.

My feeling is that generative design has far greater potential for reducing costs and optimising parts for traditional manufacturing techniques – e.g. 3/5-axis G-code-like considerations, machining, injection moulding – than has been highlighted. Meanwhile, AM as a prototyping workflow for those techniques is less mature than it could be, because the focus has been on the weird and wonderful organic parts you couldn’t make before without AM/3D printing.

AWS and NICE DCV – a happy marriage! … resulting in a free protocol on AWS

Rachel Berrys Virtually Visual blog - Thu, 05/03/2018 - 13:12

It’s now two years since Amazon bought NICE and their DCV and EnginFrame products. NICE were very good at what they did. For a long time they were one of the few vendors who could offer a decent VDI solution that supported Linux VMs, with a history in HPC and Linux they truly understood virtualisation and compute as well as graphics. They’d also developed their own remoting protocol akin to Citrix’s ICA/HDX and it was one of the first to leverage GPUs for tasks like H.264 encode.

Because they did Linux VMs and neither Citrix nor VMware did, NICE were often a complementary partner rather than a competitor, although with both Citrix and VMware adding Linux support that has shifted a little. AWS promised to leave the NICE DCV products alone and have been true to that. However, the fact that Amazon now owns one of the best and most experienced protocol teams around has always raised the possibility they could do something a bit more interesting than most other clouds.

Just before Xmas, in December 2017, without much fuss or publicity, Amazon announced that they’d throw NICE DCV in for free on AWS instances.

NICE DCV is a well-proven product with standalone customers, and for many users it offers an alternative to the Citrix/VMware offerings; which raises the question: why run VMware/Citrix on AWS if NICE will do?

There are also an awful lot of ISVs looking to offer cloud-based services and products including many with high graphical demands. To run these applications well in the cloud you need a decent protocol, some have developed their own which tend to be fairly basic H.264, others have bought in technology from the likes of Colorado Code Craft or Teradici’s standalone Cloud Access Software based around the PCoIP protocol. Throwing in a free protocol removes the need to license a third-party such as Teradici, which means the overall solution cost is cut but with no impact on the price AWS get for an instance. This could be a significant driver for ISVs and end-users to choose AWS above competitors.

Owning and controlling a protocol was a smart move on Amazon’s part; as a key element of remoting and of the performance of a cloud solution, it makes perfect sense to own one. Microsoft, and hence Azure, already have RDS/RDP under their control. Will we see moves from Google or Huawei in this area?

One niggle is that many users need not just a protocol but a broker; at the moment Teradici and many others do not offer one themselves, and users need to go to another third party such as Leostream to get the functionality to spin up and manage the VMs. Leostream have made a nice little niche supporting a wide range of protocols. It turns out that AWS are also offering a broker via the NICE EnginFrame technologies; this is, however, an additional paid-for component, but the single-vendor offering may well appeal. It was really hard to find this out – I couldn’t work out what was available from the AWS documentation and product overviews, and in the end I had to contact the AWS product managers for NICE to be certain.

Teradici do have a broker in-development, the details of which they discussed with Jack on brianmadden.com.

So, today there is the option of a free protocol and a paid-for broker (NICE plus EnginFrame, albeit tied to AWS), and soon there will be a paid protocol from Teradici with a broker thrown in; the Teradici protocol is already available on the AWS marketplace.

This is just one example of many where cloud providers can take functionality in-house and boost their appeal by cutting out VDI, broker or protocol vendors. For those niche protocol and broker vendors they will need to offer value through platform independence and any-ness (the ability to choose AWS, Azure, Google Cloud) against out of the box one-stop cloud giant offerings. Some will probably succeed but a few may well be squeezed. It may indeed push some to widen their offerings e.g. protocol vendors adding basic broker capabilities (as we are seeing with Teradici) or widening Linux support to match the strong NICE offering.

In particular broker vendor Leostream may be pushed, as other protocol vendors may well follow Teradici’s lead. However, analysts such as Gabe Knuth have reported for many years on Leostream’s ability to evolve and add value.

We’ve seen so many acquisitions in VDI/Cloud where a good small company gets consumed by a giant and eventually fails, the successful product dropped and the technologies never adopted by the mainstream business. AWS seem to have achieved the opposite with NICE, continuing to invest in a successful team and product whilst leveraging exactly what they do best. What a nice change! It’s also good to see a bit more innovation and competition in the protocol and broker space.

Open-sourced Virtualized GPU-sharing for KVM

Rachel Berrys Virtually Visual blog - Thu, 03/22/2018 - 12:05

About a month ago Jack Madden’s Friday EUC news-blast (worth signing up for) highlighted a recent announcement from AMD around open-sourcing their GPU drivers for hardware shared-GPU (MxGPU) on the open-source KVM hypervisor.

The actual announcement was made by Michael De Neffe on the AMD site, here.

KVM is an open source hypervisor, favoured by many in the Linux ecosystem and segments such as education. Some commercial hypervisors are built upon KVM adding certain features and commercial support such as Red Hat RHEL. Many large users including cloud giants such as Google, take the open source KVM and roll their own version.

There is a large open source KVM user base who are quite happy to self-support, including a large academic research community. Open-sourced drivers enable both vendors and others to innovate and develop specialist enhancements. KVM is also a very popular choice in the cloud OpenStack ecosystem.

As far as I know, this is the first open-sourced GPU-sharing technology available to the open source KVM base. AMD’s hardware sales model also suits this community well, with no software licence or compulsory support; a model paralleling how CPUs/servers are purchased.

Shared GPU reduces the cost of providing graphics and suits the economies of scale and cost demanded in Cloud well. I imagine for the commercial and cloud based KVM hypervisors, ready access to drivers can only help accelerate and smooth their development on top of KVM.

The drivers are available to download here:

https://support.amd.com/en-us/download/workstation?os=KVM# . Currently there are only guest drivers for Windows OSs. However being open source, this opens up the possibility for a whole host of third-parties to develop variants for other platforms.

There is also an AMD community forum where you can ask more questions if this is a technology of interest to you and read the various stacks and applications other users are interested in.

Significant announcements for AR/VR for the CAD / AEC Industries

Rachel Berrys Virtually Visual blog - Fri, 03/09/2018 - 16:22
Why CAD should care about AR/VR?

VR (Virtual Reality) is all niche headsets and gaming? Or putting bunny ears on selfies… VR basically has a marketing problem. It looks cool, but for many in enterprise it seems a niche technology for previewing architectural buildings etc. In fact, the use cases are far wider if you get past those big boxy headsets. AR (Augmented Reality) is essentially bits of VR on top of something see-through. There’s a nice overview video of the Microsoft HoloLens from Leila Martine at Microsoft, including some good industrial case studies (towards the end of the video), here. Sublime have some really insightful examples too, such as a Crossrail project using AR for digital twin maintenance.

This week there have been some _really_ very significant announcements from two “gaming” engines: Unity and the Unreal Engine (UE) from Epic. The gaming engines themselves take data about models (which could be CAD/AEC models) together with lighting and material information and put it all together in a “game” which you can explore – or, thinking of it another way, they make a VR experience. Traditionally these technologies have been focused on the gaming and film/media (VFX) industries. Whilst these games can be run with a VR headset, like true games they can also be used on a big screen for collaborative views.

Getting CAD parts into gaming engines has been very fiddly:
  • The meshed formats in VFX industries are quite different from those generated in CAD.
  • Enterprise CAD/AEC users are also unfamiliar with the very complex VFX industry software used to generate lighting and materials.
  • CAD/AEC parts are frequently very large, and with multiple design iterations a large degree of automation is needed to fix them up repeatedly (or a lot of manual hard work).
  • Large engineering projects usually consist of thousands of CAD parts, in different formats from different suppliers.

Many have focused on the Autodesk FBX ecosystem and 3DS Max, which with tools like the Slate material editor allow the materials/lighting information to be added to the CAD data.  This week both Unreal and Unity announced what amount to end-to-end solutions for a CAD-to-VR pipeline.

Unreal Engine

Last year at Siggraph in July 2017, Epic announced Datasmith for 3DS Max with the inference of another 20 or so formats to follow (they were listed on the initial beta sign-up dropdown) including ESRI, Solidworks, Revit, Rhino, Catia, Autodesk, Siemens NX, Sketchup; the website today lists fewer but more explicitly, here. This basically promises the technology to get CAD data from multiple formats/sources into a form suitable for VFX.

This week they followed it up with the launch of a beta of Unreal Studio. Develop3D have a good overview of the announcement, here.  This reminds me a lot of the Slate editor in 3DS Max, and it looks sleek enough that your average CAD/AEC user could probably use it without significant training (there are a lot of tutorial resources). With an advertised launch price of $49 per month it’s within the budget of your average small architectural firm, and the per-month billing makes it friendly to project-based billing.

Epic are taking on a big task in delivering the end-to-end solution themselves, but they seem to know what they are doing. Watching their hiring website over the last six months, they seem to have been hiring a large number of staff, both in development (often in Canada) and in sales/business roles for these projects (hint: the roles are often tagged with “enterprise” – so easy to spot). Over the last couple of years they’ve also built up a leadership team for these projects, including Marc Petit, Simon Jones and Christopher Murray, and it’s worth reviewing the marketing material those folks are putting out.

Unity Announcement

On the same day as the UE announcement, Unity countered with an announcement of a similar end-to-end solution via a partnership with PiXYZ, a small but specialist CAD toolkit provider.

Whilst the beta is not yet released, PiXYZ’s existing offerings look a very good and established technology match. Their website is remarkably high on detail about specific functionality, and it looks good. PiXYZ Studio, for example, has all the mesh fix-up tools you’d like for cleaning up CAD data for visualisation and VFX, and PiXYZ Pipeline seems to cover all your import needs. I’ve heard credible rumours that a lot of the CAD-focused functionality is built on top of some of the most robust industry-licensed toolkits, so the signs are positive that this will be a robust, mature solution rather fast. This partnership seems to place Unity in a position to match the Datasmith/UE offering.

It’s less clear what Unity will provide on the materials / lighting front, but I imagine something like the Unreal Studio offering will be needed.

What did we learn from iRay and vRay in CAD

Regarding static rendering in VFX land: vRay, Renderman, Arnold and iRay compete, with iRay taking a fairly small share. However, via strong GPU, hardware and software vendor partnerships, iRay has become the dominant choice in enterprise CAD (e.g. Solidworks Visualize etc.). CAD loves to standardise, so it will be interesting to see whether a similar battle of Unity vs Unreal unfolds with an eventual dominant force.

Licensing and vendor lock-in

This has all been enabled by the shift in the licensing models of the gaming engines, demonstrating they are serious about the enterprise space. For gaming, a game maker would pay a percentage, such as 9%, to use a gaming engine to create their game. This makes no sense in the enterprise space, where integrating a gaming engine is a tiny additional feature on the overall CAD/PLM deployment. So you will see lots of headlines about “royalty free” offerings; the revenues are in products such as Datasmith and Studio. The degree to which both vendors rely on third-party toolkits and libraries under the hood (e.g. CAD translators, the PiXYZ functionality etc.) will also dictate profitability, via how much Unreal or Unity have to pay in licensing costs.

These single vendor / ecosystem pipelines are attractive but relying on the gaming engine provider for the CAD import and materials could potentially lead to lock-in which always makes some customers nervous. Having done all the work of converting CAD data into something fit for rendering and VR I could see the attraction of being able to output it to iRay, Unity or Unreal, which of course is the opposite of what these products are.

Opportunities

There’s a large greenfield market in CAD/AEC of customers who make very limited or no use of visualisation. Whilst the large AEC firms may have little pockets of specialist VFX, your average 10-person architecture firm doesn’t, and likewise for the bulk of the Solidworks base. This technology looks simple enough for those users, but I suspect uptake by SMBs may be slower than you might presume, because for projects won on the lowest bid, why add a VR/AR/professional render component if Sketchup or similar is sufficient?

In enterprise CAD, AEC and GIS there are already VR users with bespoke solutions and strong specialist software offerings (often expensive) and it will be interesting to see the dynamics between these mass-market offerings and the established high-end vendors such as ESI.io or Optis.

These announcements are also setting Unity and Unreal up to start nibbling into the VFX, film and media ecosystems where specialist complex materials and lighting products are used. For many in AEC/CAD these products are a bit overkill. A lot of these users are likely to be less inclined to build their own materials and simply want libraries mapping the CAD materials (“this part is Steel”) to the VFX materials (“this is Steel and Steel should behave like this in response to light”). In the last month or so we’ve seen UE also move into traditional VFX territory with headlines such as “Visually Stunning Animated Feature ‘Allahyar and the Legend of Markhor’ is the First Produced Entirely in Unreal Engine” and Zafari – a new children’s cartoon TV series made using UE.

 

I haven’t seen any evidence of any integrations with the CAD materials ecosystems bridging that CAD materials (“this part is Steel”) to the VFX materials (“this is Steel and Steel should behave like this in response to light”) part of the solution. If this type of solution becomes mainstream it would be nice to see the material specialists (e.g. Granta Design) and CAD catalogues (e.g. Cadenas) carry information about how VFX type visualisation should be done based on the engineering material data. One to look out for.

 

Overall, I’m very interested in these announcements – lots of sound technology and use cases – but whether the mass market is quite over the silly VR headset focus just yet… we’ll soon find out.

IoT Lifecycle attacks – lessons learned from Flash in VDI/Cloud

Rachel Berrys Virtually Visual blog - Wed, 08/23/2017 - 12:55
There are lots of parallels between cloud/VDI deployments and “the Internet of Things (IoT)”; basically, they both involve connecting an end-point to a network.

One of the pain points in VDI for many years has been Flash redirection. Flash is a product that its maker, Adobe, seems to have been effectively de-investing in for years. With redirection there is both server and client software. Adobe dropped development of the Linux client many years ago, then surprisingly resurrected it late last year (presumably after customer pressure). Adobe have since said they will kill the Flash player on all platforms in 2020.

Flash was plagued by security issues and compatibility issues (client versions that wouldn’t work with certain server versions). In a cloud/VDI environment the end-points and cloud/data center are often maintained by different teams or even companies. This is exactly the same challenge that the internet of things faces. A user’s smart lightbulb/washing machine is bought with a certain version of firmware, OEM software etc. and how it is maintained is a challenge.

It’s impossible for vendors to develop products that can predict the architecture of future security attacks and patches are frequent. Flash incompatibility often led to VDI users using registry hacks to disable the version matching between client and server software, simply to keep their applications working. When Linux Flash clients were discontinued, it left users unsupported as Adobe no longer developed the code and VDI vendors were unable to support closed source Adobe code.

The Flash Challenges for The Internet of Things
  • Customers need commitments from OEMs and software vendors on support matrices and how long a product will be updated/maintained.
  • IoT vendors need to implement version checking to protect end-clients/devices from being downgraded to vulnerable versions of firmware/software and from lifecycle attacks (a minimal sketch follows this list).
  • In the same way that VDI can manage/patch end-points, vendors will need to implement ways to manage IoT end-points.
  • What happens to a smart device if the vendor drops support or goes out of business? Is the consumer left with an expensive brick? Can it even be used safely?
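To make the downgrade point concrete, here is a minimal sketch of the kind of check an update agent can make before accepting an image – it is not any specific vendor’s mechanism, and the version strings are made up for illustration. In practice this check would sit alongside signature verification of the image itself.

from packaging.version import Version  # third-party 'packaging' library

def should_install(current_version: str, offered_version: str) -> bool:
    # Refuse anything that is not strictly newer than the running firmware,
    # so an attacker cannot push the device back to a known-vulnerable build.
    return Version(offered_version) > Version(current_version)

assert should_install("2.1.0", "2.2.0") is True    # normal upgrade
assert should_install("2.1.0", "1.9.9") is False   # downgrade / lifecycle attack
assert should_install("2.1.0", "2.1.0") is False   # re-install of the same build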

There was a recent article in the Washington Post on Whirlpool’s lack of success with a connected washing machine; it comes with an app that lets you “allocate laundry tasks to family members” and share “stain-removing tips with other users”. With uptake low, it raises the question of how long OEMs will maintain such services and applications. Many consumer devices such as washing machines are expected to last 5+ years. Again, this is a challenge VDI/Cloud has largely solved for thin clients, devices with long 5-10 year refresh cycles.

Android Rooting and IoS Jailbreaking – lessons learned for IoT Security

Rachel Berrys Virtually Visual blog - Mon, 08/21/2017 - 11:38

Many security experts regard Android as the wild west of IT: an OS based on Linux, developed by Google primarily for mobile devices but now becoming key to many end-points associated with IoT, automotive, televisions etc. With over 80% of smartphones running Android and most of the rest using Apple’s iOS, Android is well established and security is a big concern.

Imagine you are a big bank and you want 20,000 employees to be able to access your secure network from their own phones (BYOD, Bring Your Own Device), or you want to offer your millions of customers your bank’s branded payment application on their own phones. How do you do it?


Android and iOS have very different security models and very different ways they can be circumvented. Apple with iOS have gone down the route of only allowing verified applications from the Apple App Store to be installed. If users want to install other applications they can compromise their devices by jailbreaking their iPhone or similar. Jailbreaking can allow not only the end user but also malicious third parties to circumvent Apple’s controls in iOS. iOS implements a locked bootloader to prevent modification of the OS itself or applications gaining root privileges.

Many people describe “rooting” on Android as equivalent to jailbreaking. It isn’t. Android already allows users to add additional applications (via side-loading). Rooting an Android device can allow the OS itself to be modified. This can present a huge security risk: once the OS on which applications run has potentially been compromised, an application running on it can’t really establish whether the device is secure. Asking pure software on a device “hello, compromised device – are you compromised?” is simply a risky and silly question. Software alone can theoretically never guarantee to detect that a device is secure.

There are pure software applications that purport to establish whether a device is compromised, usually via techniques such as looking for common apps that can only be installed if a device is rooted/jailbroken, characteristics left by rooting/jailbreaking applications, or signs of known malicious viruses/worms etc. These often present a rather falsely reassuring picture: they will detect the simplest and the majority of compromises, so it looks like such applications can detect a potentially insecure device. However, for the most sophisticated compromises, where the OS itself is compromised, the OS can supply such applications with the answer that the device is secure even when it isn’t. Being able to patch and upgrade the OS has a number of technical benefits, so some OEMs ship Android devices rooted, and there is a huge ecosystem of rooting kits to enable it to be done. Rootkits can be very sinister and hide themselves, though, lurking and waiting to be exploited.

Knowing whether your OS is compromised is a comparable problem to that faced with hypervisors in virtualisation, and one that can be solved by relying on hardware security, where the hardware below the OS can detect if the OS is compromised. Technologies such as Intel TXT on servers take a footprint of a hypervisor, lock it away in a hardware unit and compare the hypervisor against that reference at boot and ongoing; if the hypervisor is meddled with, the administrator is alerted.
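The measure-and-compare idea is simple enough to sketch in a few lines. This is purely illustrative: real technologies like Intel TXT take the measurement in hardware and seal the reference in a TPM rather than a file, and the paths below are hypothetical.

import hashlib

def measure(image_path: str) -> str:
    # The "footprint": a cryptographic hash of the hypervisor image.
    with open(image_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def verify_boot(image_path: str, reference_path: str) -> bool:
    # Compare today's measurement against the known-good reference taken earlier.
    with open(reference_path) as f:
        golden = f.read().strip()
    return measure(image_path) == golden

# Hypothetical usage:
# if not verify_boot("/boot/hypervisor.bin", "/var/lib/measurements/hypervisor.sha256"):
#     alert_administrator()  # placeholder for whatever alerting you have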

Recognising the need for security for Android and other rich OSs, technologies have emerged from OEMs and chip designers that rely on hardware security. Usually these technologies include hardware support for trusted execution, trusted root and isolation with a stack of partners involved to ensure end applications can access the benefits of hardware security.

Typically, there is some isolation where both trusted and untrusted processors and memory are provided (some technologies allow the trusted and untrusted “worlds” to be on the same processor). The trusted world is where tested firmware can be kept; it remains a safe haven that knows what the stack above it, including the OS, should look like. Trusted Execution Environments (TEE) and trusted root are common in cloud and mobile and have enabled the widespread adoption of, and confidence in, mobile payment applications etc.

Many IoT products have been built upon chips designed for mobile phones, thin clients etc., and as such, with Linux/Android OSs, they have the capability to support hardware-backed security. However, many embedded devices were never designed to “be connected” with such security considerations. For the IoT (Internet of Things) to succeed, the embedded and OEM ecosystems need to look to hardware-based security, following the success of the datacentre and mobile in largely solving the secure connection problem.

Of course, it all depends on the quality of execution. Enabling hardware security is a must for a secure platform; however, if a software stack is then added where a webcam’s default password is hardcoded, the device can still be compromised.

Effective Digital Content: Identifying your content top 10!

Rachel Berrys Virtually Visual blog - Mon, 08/14/2017 - 11:47
Make your top content work even harder!

This is a quick and dirty trick common in enterprise marketing and often used by proactive product managers themselves. Most enterprise product marketing and product managers can get access to the Google/WordPress analytics for their products.

It is typical that a small percentage of the content on any website attracts the most reads. I’ve recently done some analysis on my own blog site. In this article, I’ll use it as an example to explain:

1)      How to analyse your view metrics to deduce your top content

2)      Tell you what trends you may see and what they may mean

3)      Provide a bit of background theory

There are plenty of tools out there to analyse content success, but they take time to learn and are often quite expensive; all this requires is a bit of Excel. It’s something the lone blogger can also use. Keeping the tools simple also makes sure you are getting hands-on familiarity with your content data and the underlying methodologies those tools use.

Most website analytics should provide you with views/reads per page/blog. Personally, I’d advise looking at unique viewers, if you can, rather than page views (a few frequent users of a page can distort the data). I’d also advise filtering out or analysing separately, internal/intranet viewers, especially in a large company (quite often you’ll find your internal marketing team is the biggest consumer of their own marketing!).

WordPress, Google Analytics and similar should all provide you with some metrics on readership. It’s often not important whether the data has flaws; what matters is that the method of counting views is the same for all the pages and has been consistent over the time the data was collected.

How to analyse your data

This may look a bit scary BUT get to grips with it and you’ll have some graphs and data to add to any marketing update. Once you’ve done it once you can produce a reasonable report in less than an hour and with a bit of practice 15 minutes.

1)      I took my blog site views from WordPress for this year, in descending order, and exported them to .csv using the button in WordPress to do so. I then opened the file in Excel and plotted the column of views. The blog title was in column A and the number of views in column B, starting at B1. Google Analytics will allow you to extract similar data.

2)      In cell C1 I then added “=B1”, and in cell C2, “=C1+B2”. This will give you cumulative views across the site, incremented for each piece of content.

3)      I then used the fill-down feature on column C, selecting the cells from C2 downwards. In this case there were 108 pieces of content, so I filled down to cell C108.

4)      Then, in two spare cells below, I entered “=C108*0.5” and “=C108*0.8”. These will give you the number of views corresponding to 50% and 80% of all views. (If you prefer scripting, the same steps are sketched in code after this list.)
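For anyone who would rather script it than fill down in Excel, here is a rough equivalent in Python/pandas. It assumes an export with “title” and “views” columns in a file called views.csv – both assumptions about your particular analytics export, so adjust the names to match yours.

import pandas as pd

df = pd.read_csv("views.csv").sort_values("views", ascending=False).reset_index(drop=True)
df["cumulative"] = df["views"].cumsum()   # the running total built in column C above
total = df["views"].sum()

# 1-based rank of the piece of content where the running total first reaches
# 50% and 80% of all views (the "top 7" and "top 24" figures in my case).
rank_50 = (df["cumulative"] >= 0.5 * total).idxmax() + 1
rank_80 = (df["cumulative"] >= 0.8 * total).idxmax() + 1
print(f"{rank_50} pieces of content account for 50% of views, {rank_80} for 80%")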

What are we looking for

• Are your homepages/landing pages in the top 10%? These are the pieces of content from which you have the most control over user journeys around your site.

• Which are your top 10%, or even top 10 (actual number), pieces of content?

• Which content accounts for 50% and 80% of your views?

Analysing your view data

Take the 50% and 80% view figures from step 4 above and, in column C, note the index/rank of the content where column C is nearest to those numbers. In my case 50% and 80% of views were accounted for by my top 7 and top 24 pieces of content respectively.

From the data in column B I plotted the views for each piece of content (blog or webpage), and I changed the colour of the 7th and 24th pieces of content on the graph to highlight these key numbers (in red).
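The same chart can be produced directly from the export; a minimal sketch is below. The file name, column names and the cut-off ranks of 7 and 24 (the figures from my own data) are assumptions to swap for your own.

import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("views.csv").sort_values("views", ascending=False).reset_index(drop=True)
cutoffs = {7, 24}  # ranks accounting for ~50% and ~80% of views on my site

colours = ["tab:red" if rank in cutoffs else "tab:blue" for rank in range(1, len(df) + 1)]
plt.bar(range(1, len(df) + 1), df["views"], color=colours)
plt.xlabel("Content rank")
plt.ylabel("Views")
plt.title("Views per piece of content (50% / 80% cut-off items in red)")
plt.show()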

 

This pattern is pretty typical of many websites and blogs. A small percentage, often less than 10%, will account for 50% or more of your views, and 80% of your views will typically come from around 20% of your material (this is a manifestation of Pareto analysis, which in turn links to Zipf’s law… more on that later). It’s amazing how well most content sites fit this pattern.

 

Make your top content work harder

So, a quick bit of Excel and maths has left me with the knowledge of which 7 articles out of 108 are attracting the most views. Since these are what people are _actually_ reading, the next steps are to check the quality of the experience and improve it. I’ll cover some checklists and quick tricks for doing this in future articles.

It’s also worth reviewing what your least successful content is and why. This is the stuff where you “may” have basically wasted your time! Common reasons include:

• It’s not a topic of interest, so a blog may not have been socialised because people didn’t think it was worth sharing!

• It’s useful and important content but very niche and specific, so low numbers of views are fine and to be expected.

• You have put very good content on a poor vehicle, e.g. on an area of a website that is hard to navigate to or that has been gated (requiring a deterring login/email address to be supplied).

• The content is very new relative to the period over which the data was collected. Everything may be fine; you just need to analyse newer content over shorter, more recent timeframes.

• The content isn’t optimised for SEO or well linked to from your other content.

In my own analysis, I was pleased to see that my home page is the 2nd-ranking piece of content. Normally you’d hope and expect landing/home pages to be high up the list, as the friendly entry points to your user journey. The article that came top was one that had been syndicated and socialised on Reddit, so I was comfortable with understanding its unusually high readership.

Key things to remember

• The set of content you analysed is not independent of other content your company or competitors produce. You need to understand what percentage of your inbound traffic is coming to your blog site versus, say, your support forums or knowledge base. You also need to understand whether the numbers coming to your site are good or bad versus the general market and competitors.

• The time period over which you analyse data _really_ matters. Older, well-read material scores higher on Google; very recent material has had less time to accumulate views. My blog is more like a website than a blog in that the percentage of recent new content is fairly low.

• Marketing tags: if you are a keen user of tagged URLs for different campaigns, you may need to do some processing on your view data, as multiple URLs may map to a single piece of content (see the sketch after this list).

• If you are looking at a large site and/or one with a lot of legacy history, it’s not unusual to have thousands of pages with very low views. Sometimes it’s better just to discard data for pages below, say, 10 views.
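As a minimal sketch of the campaign-tag point above: before counting views, collapse tagged URLs back onto their base page by dropping the query string. The column names (“url”, “views”) and the file name are assumptions about your export.

from urllib.parse import urlsplit, urlunsplit
import pandas as pd

def strip_campaign_tags(url: str) -> str:
    # Drop the query string (utm_* and friends) so tagged URLs collapse to one page.
    parts = urlsplit(url)
    return urlunsplit((parts.scheme, parts.netloc, parts.path.rstrip("/"), "", ""))

df = pd.read_csv("views.csv")
df["page"] = df["url"].map(strip_campaign_tags)
views_per_page = df.groupby("page")["views"].sum().sort_values(ascending=False)
print(views_per_page.head(10))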

 

The theory

Many of the newer tools/applications are like black boxes; your average digital marketer uses them without knowledge of the algorithms. When websites were quite new, this type of hands-on analysis was more common. Website traffic statistics often obey Zipf’s law, a statistical pattern that shows up in language (this is also relevant to current Natural Language Processing/Understanding (NLP/NLU) work and AI). So, a quick theory/history lesson:

• Back when “The Sun” newspaper website was fairly young (in 1997), some analysis was done that was widely noted. Jakob Nielsen did some work analysing the Zipf fit for “The Sun” website. Nielsen is one of the godfathers of user experience, dating back to the 1980s and the dawn of the internet (this guy was in Bell and IBM labs at the right time!), and founder of the Nielsen Norman Group, who still provide futurology and research to enterprise-grade marketing.

• Data Science Central have discussed website statistics a few times, including the Zipf effect and some of the caveats of traffic analysis; some sites split content to boost page ratings, and SEO/bots can throw in data anomalies.

Zipf’s law is widely found in language, connected ecosystems and networking. It’s used to explain city growth, and the connected nature of the internet means it’s not too surprising it crops up there. Other insightful reads:

• Why Zipf’s law explains so many big data and physics phenomena.

• An old but very interesting read from HP on various areas of the Internet where Zipf’s law pops up.

• A nice overview from digital strategists parse.ly: Zipf’s Law of the Internet: Explaining Online Behavior (their clients include The Washington Post and many other large media houses).

• Do Websites Have Increasing Returns? More insight from Nielsen on the implications of Zipf.

• A nice blog from a real Digital Marketing Manager giving an overview of Zipf.

 

So, I also plotted views vs rank, both on log scales, for my blog site. The shape of the graph pleasingly fits the theory (note the linear trendline overlaid in orange).
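If you want to reproduce that plot from the same export, a sketch is below; under Zipf-like behaviour the points fall roughly on a straight line in log-log space, so a linear fit of the logs is overlaid. The file and column names are assumptions, as before.

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

views = pd.read_csv("views.csv")["views"].sort_values(ascending=False).to_numpy()
views = views[views > 0]                 # log scales need positive values
rank = np.arange(1, len(views) + 1)

slope, intercept = np.polyfit(np.log10(rank), np.log10(views), 1)
fit = 10 ** (intercept + slope * np.log10(rank))

plt.loglog(rank, views, "o", label="content")
plt.loglog(rank, fit, "-", color="tab:orange", label=f"linear fit (slope {slope:.2f})")
plt.xlabel("Rank (log scale)")
plt.ylabel("Views (log scale)")
plt.legend()
plt.show()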

*Image(s) licensed by Ingram Image

 

XenApp 6.5…incoming!

Paul Lowther - Fri, 02/17/2012 - 23:05

Hey folks,

I know it’s been a while and I’m still getting visits to the site.  A lot of the information I posted here is still valid, so thanks for your continued visits.

I’m just about to embark on getting XenApp 6.5 put into our environment, based on Windows 2008 R2 (of course).  While I won’t be doing the direct engineering myself, I’ll be heading up the team doing it (stuff happens, people move on), and I’ll be able to bring you information as it comes in.

So, keep tuned in.

What’s more we’re looking to do a sizeable implementation of XenDesktop on XenServer too, so I’ll be sure to update you on some of that too.

If you have any requests, let me know – I’ll be sure to try to get the info!

PL


Categories: Citrix

Citrix Receiver and Juniper SSLVPN

Paul Lowther - Sat, 10/02/2010 - 18:25

What do you do if you have a requirement to make your Citrix farm(s) available outside of the company firewall? ‘Available’ meaning usable on any device – becoming truly device agnostic!

You could punch some holes through your firewall and hope it meets the stringent company security regulations.

You could buy a Citrix NetScaler solution and use its built-in Access Gateway functionality to ‘easily’ allow ICA traffic into your network.

But… what if your company had already invested in SSL VPN technology and couldn’t justify NetScaler?

The answer, if you chose Juniper – which many companies do due to its standing in the technology space and its magic quadrant position with Gartner and Forrester – is actually all rather simple.

On September 8th, Juniper released their new Junos Pulse app for iOS 4.1 and above. This means that any device currently compatible with iOS 4.1 can establish an SSL connection through the Juniper devices into a secure company network. Once the connection is established, you can fire up Citrix Receiver, put in your simple connection string for your farm and hey presto – access to your published applications and desktops on XenApp and XenDesktop.

OK, so we’re not device agnostic yet, but…

iOS 4.2 is out in November and will be released for the iPad – a big game changer for mobile computing due to its portability and screen real estate (self-confessed fanboy!) – which will mean Junos Pulse will work immediately, once installed and connected to your SSL VPN device.

For the non-Apple devices, I have it on good authority that Droid, Symbian, Windows Mobile and BlackBerry versions are all in beta development at the moment and will be released ‘soon’. Great news… and a step towards device-agnostic usage, so long as there is a Citrix Receiver for your platform too.

Getting it to work:

Installing the app is as simple as any app from the App Store, and configuring it is also pretty simple. What’s more, with the Apple iPhone Configuration Tool for OSX/Windows v3.1, you can create pre-configured connections for your device, which does the ‘hard’ work for your end users!

Configuring the Juniper SSL device is fairly simple too. As long as you are using the NetworkConnect function, your device will have access – albeit fairly pervasive access – to the network you’re connecting to.

What I recommend you do is:

Set up a separate realm for mobile devices, which you specify as the connection string.
Create a new sign-in page that is friendly to small screens – check out the Juniper knowledge base for a sample download.
Limit the devices you want to have connect by specifying the client device identifier.
Limit the sign-in screen to be available to the *Junos* browser only.
Add blacklists of network locations you don’t want everyone to have access to. These could be highly confidential data repositories or your ‘crown jewels’.
Add whitelists of Citrix servers you want your folks to have access to while on the network, or, if you’re happy that the blacklist is sufficient, allow * for a more seamless and agile implementation which will not need adjustment as your farm grows.

There is a lot of flexibility in the solution and depending on your security needs you can mix and match some of these ideas and more in what constitutes a valid policy for your company. The more controls you add, the more you may need to revisit the configuration as devices arrive and requirements change.

Once you are up and running with NetworkConnect you can configure your Citrix Receiver client, connect and start using your Citrix apps straight away.

I was impressed by how quick it was to achieve and how painless the process has been made.

I don’t work for Juniper and have only recently become familiar with the technology but in my mind, Junos Pulse is a complete breath of fresh air. In forthcoming releases there will be host checkers and cache cleaners etc to ensure the device is adequately secure before allowing connection.

The area of mobile security is still in its infancy; it will be interesting to see if Juniper keeps up with the requirements for more security – or, my hope is, becomes the lead for others to follow!

PL


Categories: Citrix

Citrix Merchandising Server 1.2 on VMWare ESX (vSphere)

Paul Lowther - Sun, 03/21/2010 - 10:49

I recently acquired (yesterday) the Tech Preview version of Merchandising Server 1.2 from Citrix, which is specifically packaged for use on VMware ESX.

Version 1.2 has been out for a short while, and whereas I had it running rather well on a XenServer, my company is a VMware-only place right now, so getting this into a Production state would have meant jumping through several hoops.  I attempted to convert the Xen package over to VMware but consistently got issues with the XML data in the OVF.

The new VMware-packaged file, which is around 450MB, imported without a hitch!  Now I’m up and running on the platform of choice, and this should make it easier for me to use in Production!  Good news!

Citrix recommends 2 CPUs and 4GB RAM for the instance.  Depending on your scale of usage, you can get it up and running with 1 CPU and 1GB RAM, but that really does depend on how large your Directory data is.  For testing, I recommend 2GB RAM, although it’s simple to adjust when you are more familiar with the load that is required for your environment.

If I find any gotchas with the configuration or getting Receiver/Plug-ins working with the Web Interface, I’ll let you know!

Thanks for reading, leave a comment!

PL


Categories: Citrix

AppSense 8.0 SP3 CCA Unattended

Paul Lowther - Fri, 03/19/2010 - 14:03

If you’re wanting an unattended installation of your AppSense CCA (Client Communications Agent), you will want to look here.

This is documented in the Admin Guide but I missed it on my first run-through.

The installation is the same for the 32-bit or 64-bit version; simply call the right MSI for your server type.  This is also true for the compatible Operating System versions: there’s only one installer per architecture, but it covers all compatible OSs, which keeps it relatively simple.

Installation Script

@echo off
REM *** SETTING UP THE ENVIRONMENT
NET USE M: "\\server\share\folder" /pers:no
SET INSTALLDIR=M:\
REM **** Installing the AppSense Communications Agent (WatchDog agent installed also!)
REM **** Set this VARIABLE for your own (primary) Management Server
SET APPSENSESITE=SERVERNAME
ECHO Installing AppSense Communications Agent..
cd /d %INSTALLDIR%\AppSenseCCA
SET OPTIONS=INSTALLDIR="D:\Program Files\AppSense\Management Center\Communications Agent\"
SET OPTIONS=%OPTIONS% WEB_SITE="http://%APPSENSESITE%:80/"
SET OPTIONS=%OPTIONS% WATCHDOGAGENTDIR="D:\Program Files\AppSense\Management Center\Watchdog Agent\"
SET OPTIONS=%OPTIONS% GROUP_NAME="ZeroPayload"
SET OPTIONS=%OPTIONS% REBOOT=REALLYSUPPRESS /qb- /l*v c:\setup\log\cca.log
START /WAIT MSIEXEC /i ClientCommunicationsAgent32.msi %OPTIONS%

This will install the CCA, set the installation folders, choose your “preferred” Management Server and then add it to a Deployment Group.

Management Console Considerations

One requirement for the Deployment Group is that it is set to “Allow CCAs to self-register with this group”.

This is set in the Management Console, in the group you have created (called ZeroPayload here), under the Settings section.  Putting a tick in the box is sufficient to complete the registration setting.

Now, a server will be able to join the group with the above unattended script.

What I have done, to manage how and when the agents and packages are deployed, is set the “Installation Schedule” to “At Computer Startup – Agents are installed only when computers are started”.  I have added all the agents into this group but no PACKAGE payloads.  If you now reboot the server at your convenience, once the CCA is installed (in my case as part of a wider XenApp install), the server will install the agents and immediately REBOOT the server one more time, since you need to remember that the Performance Manager agent will automatically issue a reboot request upon installation.

If you were to set this to “Immediate” in the Installation Schedule, there would be no control over when your server reboots.  Many people fall foul of that nuance of PM as it’s easy to forget (I’m sure the guys at AppSense forget that on occasion too!).

One very cool behaviour is that you can add both 32-bit and 64-bit agents into this Deployment Group and your server will only install the version it needs for the given architecture.

So now your server is configured and ready for its final deployment.  If you’re like me and have a number of active Deployment Groups, some with a slightly different package payload, you can use this method initially, then move your server to the required Deployment Group.  If all agent versions are the same, and in the beginning they certainly should be, all that will be deployed when you move to another group is the Packages, and these don’t force a reboot.

One last thing to consider: any Environment Manager packages that have “Computer” settings will not be invoked until the next reboot.

So… there you have it in a nutshell.

Leave me a comment if you have experiences to share.

PL


Categories: Citrix

XenApp PowerShell Command Pack CTP3

Paul Lowther - Fri, 03/19/2010 - 09:09

I’ve recently started looking at PowerShell 2.0 and bought the “for Dummies” book to get me started.  My immediate need for PowerShell was to automate some XenApp farm configurations.  This is where the XenApp Command Pack CTP3 comes into the picture.

Installation:

A pre-requisite, in addition to installing the following two components, is to install .NET Framework 3.5 SP1 – this is specific to the XenApp Command Pack and the use of CTP3 functionality.

NOTE: Anywhere a ♦ is shown, it is not intended as a line break, merely a line continuation to overcome the shortcomings in WordPress!

ECHO+
ECHO Installing Windows Management Framework Core (including PowerShell 2.0)..
start /wait WindowsServer2003-KB968930-x86-ENG.exe ♦
  /quiet /log:c:\setup\log\WMF-PS.log /norestart
ECHO Installing XenApp PowerShell Commands..
cd /d "%INSTALLDIR%\Citrix Presentation Server"
start /wait msiexec /i Citrix.XenApp.Commands.Install_x86.msi ♦
  INSTALLDIR="D:\Program Files\Citrix\XenApp Commands" ♦
  /norestart /qb /l*v c:\setup\log\xa-cmds.log

Now I have the Commands installed, it’s relatively simple for me to manipulate the farm in any way I want! As far as I can see, anything that is configurable within the AMC (XenApp 5.0 FP2) can be manipulated with a PowerShell command. This includes both farm settings and server settings. I’ve also been able to set Server Groups, Server Console published icons, Administrator Access, Lesser-mortal-being Access (defined access rights) and more besides.

I would have added some of my code here, but there are some sensitive items in it and I would have to rewrite a lot just to display it.  It’s quite simple to get some quick results, believe me!

It’s a given that Citrix will increase their use of PowerShell in versions to come, such as FP3 and XenApp 6 for W2K8-R2. This for me can only be seen as a positive move!

I can’t recommend this one highly enough.  Check it out.

Leave a comment and thanks for reading.

PL


Categories: Citrix

Subscribe to Spellings.net aggregator - Citrix