Monday 30 January 2012

Neocoretech – Reduce your VDI project cost and improve performance!

When I talk about the Neocoretech NDV VDI solution, the common question I’m asked by VMware View engineers is “How do I size the SAN?”

There is normally a confused look when I suggest there is no need for a SAN!

First of all, we have to understand that VMware View is built on the VMware ESX infrastructure, where a Storage Area Network (SAN) is a requirement for server virtualisation.

For VMware View, the backbone of the solution is shared storage, and SANs are often used to provide this. The underlying VMware ESX infrastructure has to be configured to use logical unit number (LUN) addresses in order to give high availability. This means a LUN on the SAN is reserved for two servers, so if one server goes down, the other still points to the same disk. Direct Attached Storage (DAS) cannot work with this approach, as by definition it cannot be shared.

Neocoretech provides a different approach based on clusters, utilising direct attached storage: a dedicated Ethernet interface links two servers into a disk cluster, and the disks are mirrored over the LAN. This also improves performance, as there are more disks, and therefore more spindles, available to deliver disk I/O.
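
As a rough, back-of-envelope illustration of the spindle argument (every figure below is an assumption chosen for illustration, not Neocoretech sizing guidance):

// Back-of-envelope IOPS estimate -- all figures are illustrative assumptions.
var iopsPerSpindle = 150;  // e.g. a typical 10k RPM SAS disk
var disksPerServer = 8;    // assumed DAS configuration
var servers = 2;           // the mirrored pair

// Reads can be served from every spindle in the cluster...
var readIops = iopsPerSpindle * disksPerServer * servers;  // 2400

// ...but each write must land on both mirrors, so effective write
// capacity is roughly one server's worth of spindles.
var writeIops = iopsPerSpindle * disksPerServer;           // 1200

console.log(readIops + ' read IOPS, ' + writeIops + ' write IOPS');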

Combined with the use of read-only desktops in the Neocoretech NDV solution, this means the VDI infrastructure is only delivering the desktop, so dedicated network storage can (and should) be used to manage the data and storage.

With correct sizing and infrastructure, Neocoretech NDV will not only lower the TCO, giving a quicker ROI, but also remove the need to endure the SAN IOPS storm in the morning!

Thursday 26 January 2012

Is Two-Factor Authentication a commodity?

The complexity with passwords

We all know we need secure passwords, or at least to keep them secret. The problem is that we are asked to increase the complexity of passwords with the inclusion of upper-case characters, lower-case characters, special characters or numbers. Making passwords more complex must increase security… or does it lead to users writing passwords down, or recycling the same passwords across a number of environments?

“Something you know, Something you are given, Something you are”

Obviously, one of the downsides of passwords is that they can be passed from person to person, at which point you lose accountability for the actions of the user who has logged in. This is where the requirement for multi-factor authentication arose, so there would be a number of elements to confirm the validity of the person and the action.

Multi-factor authentication is said to be made up of two of the following three elements: “Something you know”, such as passwords and PINs; “Something you are given”, such as one-time passwords; and “Something you are”, such as iris and fingerprint scans.

Some people will suggest that using multiple factors of the same type, such as multiple passwords and PINs, would make it multi-factor. I disagree, and would call that “strong authentication”; I have also heard it referred to as “1.5-factor authentication”.

Two-Factor Authentication requires a specialist?

In the past, there was a high level of complexity associated with two-factor authentication, and it was something only to be tackled by specialists in the field: complicated multi-server implementations to build resiliency, more databases to administer, and a variety of tokens to manage, and all that before anything was even deployed or secured!

Hacked… June 2011!

Undoubtedly, most people reading this will be aware of a compromise that was reported in June 2011, where one of the world’s largest token vendors had (reportedly) 40 million tokens compromised. Suddenly, all that hard work seems to have been for nothing. What did all the complexity bring, other than complexity for complexity’s sake?

Commoditised market?

There are a number of vendors offering two-factor authentication, but most organisations see it as a must-have rather than a want-to-have. The barriers to entry have been not only complexity, but also security, administration time and, in the current economic climate, cost.

Cloud or On-premise?
A cloud service will reduce hardware costs, running costs, power and energy, and brings all the other benefits associated with moving to a hosted solution. There is always a concern about physical security, so ensure the provider meets the right criteria and standards. There will be concerns around uptime, so ensure there is a good SLA in place. For data security, ensure data is encrypted and never sent over the internet in clear text.

If these concerns are insurmountable, then look at an on-premise solution, but ensure the solution is highly available if access to it is business critical. Ensure that the administrators looking after the solution can manage it correctly, or have the relevant support contracts in place to provide this.

It would be useful to have a choice of platforms, whether it is cloud or on-premise.

Ease of use?
In most IT environments we have to manage multiple systems, so we all want something that is easy to use.

An intuitive, simple-to-use management console with good help features, as well as platform parity between the cloud and on-premise solutions, would be the way forward.

Token options?
Some providers will only offer hardware tokens, some will offer software tokens, some will offer tokens to run on mobile devices, some will offer SMS and/or email tokens, some will offer OATH tokens, and some will offer grid tokens.

What does your user base need?  What mix of tokens is required?  Will there be a company policy to define the type of tokens that will be offered?  What sort of mobile phones need to run tokens?

The preference would be to have all the token types available, and at an attractive price point.

Event or Time-based?
To simplify the way a one-time password is generated: a time-based token takes the current time and runs it through an algorithm (typically a keyed hash) with a secret seed to generate the one-time password; an event-based token does the same, but uses a counter value that advances on each use instead of the time.

There are arguments for both approaches: time-based tokens can potentially drift out of sync, while with event-based tokens the password remains valid until it is used. Of more concern are the seeds that are pre-populated onto the tokens, as anyone who compromised them could potentially generate your one-time passwords!

Ideally, you want the ability to choose either time-based or event-based authentication, and the ability to generate your own seeds, so that even the two-factor authentication vendor does not know them.
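
Under the hood (a minimal sketch, assuming an OATH-style token as defined in RFC 4226 and RFC 6238, not any particular vendor's implementation), both variants reduce to the same keyed-hash construction; only the counter differs:

// Minimal OATH one-time password sketch (RFC 4226 HOTP / RFC 6238 TOTP).
// Illustrative only -- the seed below is the RFC test value, a placeholder.
var crypto = require('crypto');

function hotp(seed, counter, digits) {
  digits = digits || 6;
  // Pack the counter into an 8-byte big-endian buffer.
  var buf = Buffer.alloc(8);
  for (var i = 7; i >= 0; i--) {
    buf[i] = counter & 0xff;
    counter = Math.floor(counter / 256);
  }
  // Keyed hash (HMAC-SHA1) of the counter, using the secret seed as the key.
  var hmac = crypto.createHmac('sha1', seed).update(buf).digest();
  // Dynamic truncation: pick 4 bytes at an offset given by the last nibble.
  var offset = hmac[hmac.length - 1] & 0x0f;
  var code = ((hmac[offset] & 0x7f) << 24) |
             (hmac[offset + 1] << 16) |
             (hmac[offset + 2] << 8) |
             hmac[offset + 3];
  // Reduce to the requested number of decimal digits, zero-padded.
  var otp = String(code % Math.pow(10, digits));
  while (otp.length < digits) otp = '0' + otp;
  return otp;
}

// Time-based: the "counter" is simply the current 30-second time step.
function totp(seed) {
  return hotp(seed, Math.floor(Date.now() / 1000 / 30));
}

var seed = Buffer.from('12345678901234567890'); // placeholder shared seed
console.log('Current OTP: ' + totp(seed));

This also makes the seed concern above concrete: anyone holding the seed can run exactly this calculation and produce your one-time passwords.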

Authentication Methods?
Most solutions support RADIUS; some will support Windows logon; some will support integration with OWA, SharePoint, IIS and Apache; some will support Citrix; and some will occasionally support SAML.
You don't want to be limited in what you can authenticate against; you want a solution that supports standards such as SAML, as these will be used more and more as cloud application usage increases.

Longevity?
With so many new start-ups and small organisations now around, and the largest two-factor authentication vendor having been compromised, it is difficult to know who to trust!

We want a vendor with a good security history, but with the foresight to innovate, develop and implement solutions for the future.

Cryptocard
Offering a cost-effective solution with a large variety of tokens, the choice of a cloud-based or on-premise platform, an easy-to-use interface, the choice of time-based or event-based tokens, the ability to populate tokens with your own seeds, and support for a large number of applications and standards, from a company that has been around for over 21 years, makes Cryptocard the solution that should be considered first.

Wednesday 25 January 2012

Clickjacking and UAG

I got an email from a customer and friend regarding penetration test results on a Microsoft UAG environment. The report highlighted that Clickjacking is a way of tricking web users into revealing confidential information, or allowing their computer to be controlled, while clicking on seemingly harmless web pages. Clickjacking can be embedded code or a script that executes without the web user's knowledge.

I took this opportunity to learn a bit more about this and found a couple of interesting websites.  It seems that other UAG users have encountered this during penetration testing before, and there is a fix:  http://forums.forefrontsecurity.org/default.aspx?g=posts&m=2788

The following code needs to be added into the UAG login.asp script:

<script type="text/javascript">
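// If this page has been framed (i.e. it is not the top-level window),
// replace the top window's location with our own to break out of the frame.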
if(top != self) top.location.replace(location);
</script>

Adding the code is fine, but I had to find a way of testing for Clickjacking. I found this site, which allowed me to test the vulnerability:  https://www.codemagi.com/blog/post/196

By creating an HTML page with the following code, and replacing the iframe src (shown here as http://localhost:8080) with the URL that you want to test, it will show whether the website is vulnerable to Clickjacking:

<html>
<head>
<title>Clickjack test page</title>
</head>
<body>
<p>You’ve been clickjacked!</p>
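<!-- Replace the src below with the URL under test. The sandbox attribute
     blocks the framed page from navigating the top window. -->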
<iframe sandbox="allow-scripts allow-forms" src="http://localhost:8080" style="width:100%;height:90%"></iframe>
</body>
</html>
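
If the target login page renders inside the frame, it can be framed and overlaid. One caveat worth knowing: the sandbox attribute above (with no allow-top-navigation) prevents the framed page's own JavaScript from navigating the top window, so remove it if you want to see the login.asp frame-busting code actually break out of the frame.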

With a little experimentation, I found the best place within the UAG login.asp to put the additional line of code was here:

var capsLockNote = "<%=GetString(111, "Note: The Caps Lock key is on. Passwords are case-sensitive.")%>";
</script>

<script type="text/javascript">
  if(top != self) top.location.replace(location);
</script>

<script language="JavaScript" src="/InternalSite/scripts/capsLock.js"></script>

I have to stress that it may be different on your UAG deployment, so remember to test that it works, rather than assume!


Tuesday 24 January 2012

Why use VDI, when I can use Terminal Services?

Working with the Neocoretech NDV VDI solution, I see many advantages to using it over VMware View or Citrix XenDesktop, due to the way it utilises RAM and negates the need for SAN solutions by using “read-only” desktops. A common question when discussing virtual desktop solutions is “Why use VDI when I can use Terminal Services?”

These points were addressed by Christophe Rettien, CTO of Neocoretech.

Architecture
  • Neocoretech NDV is a 1:1 connection between 1 Thin Client and 1 Hosted PC (in this case a Virtual PC)
  • Terminal Services is a 1:n connection between n Thin Clients and 1 Hosted Server (which could be a physical or virtual server)
Protocol
  • Neocoretech NDV is not tied to RDP and can use any available protocol including rich multimedia support, bi-directional sound and USB redirection. Available protocols are RDP, UXP, NX, RDP TCX...
  • Terminal Services IS RDP, so there is no protocol choice here. If Terminal Services does not provide sufficient performance, nothing can solve that!
Supported OSs
  • Neocoretech NDV supports any x86 OS, which allows a user to run Microsoft Windows XP, Windows 7 or Linux, in 32-bit or 64-bit distributions.
  • Terminal Services is only available on Microsoft servers, so the end user runs a remote session on a Windows 2003 or 2008 server. Some tweaks exist to create a Windows 7 look and feel from a Windows 2008 session.
Supported Software
  • Neocoretech NDV runs a single computer for each user, which means any application available on the OS used will be available to the user.
  • Terminal Services runs multiple shared instances of the same programs on a single server (2003 or 2008), and only applications that are allowed to be shared are available to the end user.
Summary

Neocoretech NDV provides:
  • Operating system choice
  • All applications are supported
  • Different applications installed according to user profile
  • All protocols supported
  • High availability options
NDV consequences:
  • Requires powerful servers
Terminal Services provides:
  • High density: a good ratio of server sizing to number of users
Terminal Services consequences:
  • CAL licensing cost per session and per application
  • Poor multimedia performance
  • Complex GPO settings, if different desktops are to be presented based on user profiles (all applications need to be installed)
Conclusion

As with all solutions, it is more important to understand the requirement, rather than push a technology.  We need to understand when one solution would fit better than the other.

Terminal Services works well in environments with budget constraints, or that require a “vanilla” suite of applications. VDI solutions such as Neocoretech NDV offer greater flexibility, operating system choice and client support, and can be easily managed.