Wednesday 25 September 2013

BES 10.1 Part 2


So after spending many hours banging my head against a brick wall trying to get push notification working, I eventually got it working. I found that the password for the account associated with push notification cannot contain any capital letters.

I have also read posts that certain special characters cause problems; for example, we found that password.1 worked well (clearly a more secure version of this would be needed). I then found a separate problem: Exchange was not routing the responses to the push notification subscription to the BES server, and they were instead being routed via a proxy server. Once this was corrected, push notification worked.

We have had a couple of weeks playing around with this and still have a number of concerns. The biggest is the amount of time it takes to get into email, as the app first has to connect to the secure workspace. I also found a number of issues where the phone had joined a wireless network but had no internet connection, such as Virgin Media's London Underground Wi-Fi or BT captive-portal hotspots.

This seems to cause the application to crash, or to sit endlessly at "continuing to workspace". I have also found the app to crash even after deleting and reinstalling it. Finally, the app can be sluggish: you will be writing an email and it will hang for a few seconds. We will shortly be testing it with an iPhone 5, and while I have it on my iPad I have not used it in anger in the same way I have on the iPhone. Currently I still feel that the BlackBerrys that use a MAPI connection are superior to ActiveSync, and I also found the application eats the phone's battery.

BlackBerry BES 10.1


Blackberry Universal Device Control

 

Recently BlackBerry have released their secure workspace product, which allows email to be placed into a sandboxed application that can be installed on Android and iOS devices. This is integrated as part of the BES 10.1 server.

For people who have only ever run BES 5 servers, you will be aware that the BlackBerry servers communicate with Exchange through MAPI. As of BES 10, the connections for both non-BlackBerry devices and BlackBerry 10 devices are carried out directly through ActiveSync. BlackBerry have stated that they are moving away from the MAPI connection, and this in itself poses some interesting challenges.

With the current BES 5 servers, if we have any issues relating to emails being populated twice or emails not syncing, BlackBerry Support are responsible for identifying the issue; with ActiveSync, BlackBerry Support have told us they will ask us to refer synchronisation problems to Microsoft. SMB firms that do not have a support contract with Microsoft will therefore be liable for additional costs on top of BlackBerry Support if such issues arise.

With the ActiveSync technology, users are required to enter their network password on the end device. This is a significant change from standard BES 5, in which the user's password is not required. In many organisations that follow Microsoft's best practices, passwords are changed every thirty days, so this is an additional inconvenience and seems a step backwards for our users. The problem can be overcome by using SCEP, but that requires additional configuration and again is something that would have to be supported in house rather than as part of the BlackBerry infrastructure.

BlackBerry relies heavily on certificate technology to carry out the authentication between the non-BlackBerry device and its BlackBerry infrastructure. While it can be argued that what BlackBerry provides could equally be built with your own internal VPN infrastructure, the added complication of certificates is handled very nicely by the BlackBerry product, which takes a significant learning curve away from smaller IT departments.

That said, one of the biggest drawbacks of iOS is that Apple does not allow external parties to connect directly to iPhones, so pushing out emails as they arrive is not an option. To allow push notification, your internal BlackBerry server has to notify the Apple Push Notification service (APNs), which in turn notifies your device; your device then requests the latest emails from BlackBerry. We have had significant problems getting this working (see part 2).
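To make the indirection concrete, here is a minimal sketch of the flow in Python. All class and method names are made up for illustration (neither BES nor APNs exposes anything like this API); the point is simply that the internal server may only signal Apple, and the device then pulls the mail itself.

class ApplePushNotificationService:
    """Stands in for Apple's APNs: the only party allowed to contact the device."""
    def __init__(self):
        self.devices = {}                       # device_token -> device

    def register(self, device_token, device):
        self.devices[device_token] = device

    def push(self, device_token, payload):
        # Apple delivers a small wake-up payload; it carries no mail content.
        self.devices[device_token].on_notification(payload)

class SecureWorkspaceDevice:
    """Stands in for the iPhone app: woken by APNs, it then pulls mail itself."""
    def __init__(self, bes):
        self.bes = bes

    def on_notification(self, payload):
        # The device, not the server, initiates the actual mail sync.
        new_mail = self.bes.fetch_new_mail(payload["account"])
        print(f"Synced {len(new_mail)} new message(s)")

class BesServer:
    """Stands in for the internal BES: it may only signal Apple, never the phone."""
    def __init__(self, apns):
        self.apns = apns
        self.mailboxes = {"alice": ["msg1", "msg2"]}

    def new_mail_arrived(self, account, device_token):
        # Step 1: notify Apple, which notifies the device (steps 2 and 3).
        self.apns.push(device_token, {"account": account})

    def fetch_new_mail(self, account):
        return self.mailboxes.get(account, [])

apns = ApplePushNotificationService()
bes = BesServer(apns)
phone = SecureWorkspaceDevice(bes)
apns.register("token-123", phone)
bes.new_mail_arrived("alice", "token-123")      # prints: Synced 2 new message(s)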

We have been working with the BlackBerry team for a number of days and the case has been escalated, but from reading other blogs it appears we are not the only people having trouble getting this working. It should also be understood that, as of iOS 6, Apple still does not allow applications to multitask freely in the background.

During some real-world testing we have seen a number of problems using the secure workspace on the iPhone when moving in and out of networks. For instance, using the secure workspace on the Underground, as the iPhone connected to different wireless networks the "connecting to secure workspace" box kept flashing on and off, making it impossible to compose or delete emails offline. Furthermore, we found that when the iPhone joined a wireless network that had no internet connection behind it, such as Virgin Media's, the app would crash.

So, initial thoughts on BlackBerry's secure workspace: it is clearly a Mark 1 product and will take a good twelve months to mature. It is arguable whether die-hard BlackBerry users, used to rock-solid reliability and ease of use, will be able to suffer the imperfections of running the software on a non-BlackBerry device. Where I see this being most useful is for the occasional user who checks their email once a day in the evening from their iPad; for them the solution would be more than adequate. We still have a number of problems to iron out, such as push notification on Apple devices, and will blog back later when we have resolved them.

Friday 12 April 2013

Reinstall System Center Operations Manager 2012 SP1 Issues (SCOM)


I recently had to reinstall System Center Operations Manager 2012. The initial install was done from the release candidate and had no problems; however, on removing it and reinstalling from the SP1 release, a couple of issues arose that I thought I would share with you.
 
Firstly, the database file paths that are selected need to have a trailing slash (for example, D:\SQLData\ rather than D:\SQLData), otherwise the install will fail as shown below.

(screenshot: setup error shown when the database path lacks the trailing slash)
Secondly, I had a problem where the data warehouse would not install properly; on examining the logs, there was an issue with the data access layer. The solution was to change the data access account from a local user account to a domain user account. After making this change the install completed successfully.

 
 
 [22:10:45]: Info: :Info:trying to connect with server xxxxx
[22:10:52]: Info: :Info:Error while connecting to management server: The Data Access service is either not running or not yet initialized. Check the event log for more information.
[22:10:52]: Error: :Couldn't connect to mgt server stack: : Threw Exception.Type: Microsoft.EnterpriseManagement.Common.ServiceNotRunningException, Exception Error Code: 0x80131500, Exception.Message: The Data Access service is either not running or not yet initialized. Check the event log for more information.
[22:10:52]: Error: :StackTrace:   at Microsoft.EnterpriseManagement.Common.Internal.ExceptionHandlers.HandleChannelExceptions(Exception ex)
   at Microsoft.EnterpriseManagement.Common.Internal.SdkDataLayerProxyCore.CreateEndpoint[T](EnterpriseManagementConnectionSettings connectionSettings, SdkChannelObject`1 channelObjectDispatcherService)
 
 
 

Reinstalling SQL Reporting Services for System Center Operations Manager 2012 (SCOM)


I recently had to reinstall the reporting component of System Center Operations Manager.

 
On uninstalling the reporting component, I was subsequently unable to connect to the SQL Reporting Services instance that we were using for System Center Operations Manager. On examining the event log I came across the following error message:

"Report Server (MSSQLSERVER) cannot load the Windows extension"




After some research I found the solution was to run the ResetSRS tool, which can be found on the System Center Operations Manager CD under SupportTools\AMD64\ResetSRS.exe. This resolved the reporting service instance problem, and I was then able to reinstall the reporting services module.

Friday 5 April 2013

Veeam Replication


Veeam is a host-based backup solution that requires snapshots to be taken at the guest level. This poses a number of challenges, as snapshots should not be taken on high-I/O servers and are also not supported by Microsoft. On the positive side, Veeam is a very good technology for taking multiple incremental backups. It works via a technique called reversed incremental backup. With traditional incremental backups, using software such as Backup Exec, we would restore a master backup and then apply the smaller incrementals taken afterwards. Veeam instead creates a master file; when it takes the next backup, the master file is updated to the latest state and the previous contents of the changed blocks are pushed down into a smaller incremental (rollback) file. When restoring the latest backup from Veeam you therefore only need the single backup file, with no intermediate incrementals to restore; in addition, you can roll back to previous versions, and Veeam will apply the incrementals to reconstruct a backup from whatever point in time you require.
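The reversed incremental logic is easier to see in code. Below is a minimal sketch in Python, using dictionaries of block number to data in place of real backup files; the structure and names are my own illustration, not Veeam's actual file formats.

def take_backup(full, source):
    """Update the full backup to match the source; return a rollback increment
    holding the blocks' previous contents so older states stay restorable."""
    rollback = {}
    for block, data in source.items():
        if full.get(block) != data:
            rollback[block] = full.get(block)   # save the old block content
            full[block] = data                  # full file now = latest state
    return rollback

def restore(full, rollbacks, steps_back):
    """Walk the rollback chain (newest first) to rebuild an earlier point in time."""
    state = dict(full)
    for rb in rollbacks[:steps_back]:
        state.update(rb)
    return state

full = {}                                       # the single "master" file
rollbacks = []                                  # newest increment at index 0
for snapshot in [{"a": 1, "b": 1}, {"a": 2, "b": 1}, {"a": 2, "b": 3}]:
    rollbacks.insert(0, take_backup(full, snapshot))

print(full)                                     # latest state: {'a': 2, 'b': 3}
print(restore(full, rollbacks, 1))              # one step back: {'a': 2, 'b': 1}

Restoring the latest state needs zero increments, which is the whole appeal; the further back you go, the more rollback files are applied.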

Veeam uses change block tracking: when used with VMware, the underlying hypervisor tracks the changes made to each virtual disk, vastly speeding up the time taken for an incremental backup. Because Veeam works at the hypervisor level, the load placed on the host is uniform and does not impact guest performance, so if it wasn't for the snapshot problem there would be no issue in running backups during the day. Veeam replicates at block level rather than byte level, so the amount of data replicated is more than with the Doubletake solution. Veeam also has the ability to quiesce the underlying guest operating system before it takes a snapshot, making use of the built-in shadow copy (VSS) functionality of Windows to quiesce the Exchange and SQL databases first.
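Change block tracking itself is a simple idea: the hypervisor keeps a record of which blocks have changed since the last backup, so the backup job copies only those rather than scanning the whole disk. A rough sketch, with made-up names:

class TrackedDisk:
    def __init__(self, blocks):
        self.blocks = blocks        # block_id -> data
        self.dirty = set()          # blocks changed since the last backup

    def write(self, block, data):
        self.blocks[block] = data
        self.dirty.add(block)       # the hypervisor records the change

    def incremental_backup(self):
        # Copy only the dirty blocks, then reset the tracking map.
        changed = {b: self.blocks[b] for b in self.dirty}
        self.dirty.clear()
        return changed

disk = TrackedDisk({0: "boot", 1: "data", 2: "logs"})
disk.incremental_backup()           # baseline pass: clears the tracking map
disk.write(2, "logs-v2")
print(disk.incremental_backup())    # only {2: 'logs-v2'} is read and copied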


As a backup technology, Veeam is able to save a VM to a single file on disk, which can then either be copied onto a removable drive or transferred to tape using a product such as Backup Exec. The disadvantage of this is that you are putting all your reliance in a single flat file. If that file becomes corrupt, either during the copy to tape or at the point of backup, you will not know, and you will not be able to restore any part of the VM. To be confident these files are not corrupted, we need a way of mounting and testing them in a lab environment to confirm a bootable backup.
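Mounting and boot-testing the backup is the real proof, but a simple end-to-end checksum comparison will at least catch silent corruption introduced while copying the flat file to tape or removable disk. A minimal sketch, with hypothetical paths:

import hashlib

def sha256_of(path, chunk_size=1024 * 1024):
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):      # stream: backup files are large
            digest.update(chunk)
    return digest.hexdigest()

original = sha256_of(r"D:\Backups\fileserver.vbk")   # hypothetical source path
copy = sha256_of(r"E:\Offsite\fileserver.vbk")       # hypothetical copied file
if original != copy:
    raise RuntimeError("Backup copy differs from original - do not trust it")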

Veeam has a number of different ways of restoring data from a flat-file backup. Its instant restore technology allows you to mount the backup in an emulated environment; while this gives instant access to the VM, it puts a significant load on the backup server, which has to emulate the VMDKs.

Veeam also has a traditional restore that restores the flat-file backup into the original VMDKs. This will take a significant amount of time, depending on the size of the VMDKs.

Veeam also allows single-file restore. Here Veeam mounts the VMDK behind the scenes and gives Explorer-style access to the available drives. This is a good technique for making sure that the flat-file backup is not corrupt. Because Veeam keeps multiple restore points, the further back you go the more increments need to be combined to produce the file.

Veeam Enterprise adds application-level restores, allowing you to restore individual databases, individual Exchange items and individual Active Directory objects. Overall, this product is great for less I/O-intensive servers.

 

 

VMware Replication and Backups reviewed


After completing a project to virtualise our infrastructure I have recently undertaken a project to replicate the data from the main site to a data centre.

As part of this project I reviewed a number of different products and techniques, and have blogged the following reviews of two products, Doubletake and Veeam.

One of the challenges in replicating a virtualised environment is the trade-off between in-guest, agent-style replication and virtualisation-aware replication techniques. While the virtualisation-aware techniques generally place less overhead on the infrastructure, they generally require snapshotting of the VMware VMDKs.

 
Microsoft does not support its products where snapshotting is used, and in addition there are inherent problems with creating snapshots on servers that have high I/O. In contrast, in-guest agent replication does not have these problems, but it does require significantly more resources in terms of CPU and memory, which is multiplied when the agent runs on many guest operating systems on a host. As part of this project I came up with a hybrid solution which utilises both technologies to harness the advantages of each: high-I/O servers such as SQL and Exchange, which inherently do not like snapshots, use in-guest agent-style replication, while servers with little I/O and little data change, such as IIS or service-type servers, use VMware snapshotting technology.
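Expressed as a rule of thumb, the hybrid approach looks something like the sketch below. The roles and the 50 GB daily-change threshold are illustrative assumptions of mine, not fixed numbers from our project:

def replication_method(server_role, avg_daily_change_gb):
    # Snapshot-averse, high-I/O workloads get in-guest agent replication;
    # quiet servers use hypervisor snapshot (virtualisation-aware) replication.
    high_io_roles = {"sql", "exchange"}
    if server_role.lower() in high_io_roles or avg_daily_change_gb > 50:
        return "in-guest agent replication (e.g. Doubletake)"
    return "hypervisor snapshot replication (e.g. Veeam)"

for name, role, change in [("SQL01", "sql", 120),
                           ("WEB01", "iis", 2),
                           ("SVC01", "service", 1)]:
    print(name, "->", replication_method(role, change))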
 
Please find the reviews here:
 
 
 
 

Doubletake Replication Review


Doubletake has two versions: one that works at the guest operating-system level and one that works at the VM host level. The host-level version is a new product and something that Doubletake have only recently ventured into. The operating-system-level version is a mature product: it sits within the guest operating system and copies every file that is written to disk into a queue, which is then replicated to a different machine.




Traditionally this product was used on physical servers, where there was a one-to-one ratio of production server to DR server in the data centre. More recently they have adapted the product to allow either a physical or a virtual server (it makes no difference, as this is an OS-aware backup strategy) to replicate to a single guest called a virtual recovery assistant (VRA) located on a host in the data centre. This has a significant advantage in that it reduces licensing costs.
Traditionally, Doubletake would require a licence for both the production server and the DR server. With the VRA, a licence is required for each source server but only one licence is required for the VRA itself, and if your source servers are virtual you can buy packs of five licences.
The virtual recovery assistant works by attaching multiple virtual hard disks, one set for each source server being backed up. If we have two source servers, either physical or virtual, and each has two hard disks, the virtual recovery assistant will have one operating system but will have all four hard disks attached to it. As files are changed on a source server they are placed into a queue on that server, then replicated to the virtual recovery assistant, which knows which source server they came from and applies them in the correct order to the associated hard disks. In the event that we need to bring up a failed server, the virtual recovery assistant will detach the relevant hard disks from itself and attach them to a newly created virtual server.
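The queue-and-replay mechanism can be sketched as follows. The class names are made up for illustration and bear no relation to Doubletake's actual implementation; the sketch simply shows changes being queued per source and applied in arrival order to the disk the VRA holds for that source:

from collections import deque

class SourceServer:
    def __init__(self, name):
        self.name = name
        self.queue = deque()        # changes wait here until replicated

    def file_written(self, path, contents):
        self.queue.append((self.name, path, contents))

class VirtualRecoveryAssistant:
    def __init__(self, source_names):
        # One attached virtual disk (here, a dict) per protected source server.
        self.disks = {name: {} for name in source_names}

    def replicate_from(self, source):
        while source.queue:
            name, path, contents = source.queue.popleft()
            self.disks[name][path] = contents   # apply in original order

    def fail_over(self, name):
        # In a disaster the disk is detached from the VRA and handed to a new
        # virtual server; here detaching is just removing it from the dict.
        return self.disks.pop(name)

fs1, fs2 = SourceServer("FS1"), SourceServer("FS2")
vra = VirtualRecoveryAssistant(["FS1", "FS2"])
fs1.file_written(r"\data\report.doc", "v1")
fs1.file_written(r"\data\report.doc", "v2")     # the later write must win
vra.replicate_from(fs1)
print(vra.fail_over("FS1"))                     # {'\\data\\report.doc': 'v2'}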
One of the great advantages of using this technology is that we are able to actually see the physical files that have been replicated on the VRA, as these are mount points to the VHDs. The disadvantage is that in a disaster we need the virtual recovery assistant to be fully functional, as we must run the proper Doubletake scripts to detach the mount points from the VRA, build the appropriate server and attach the hard disks. This should not be done as a manual task, as that will generally cause the virtual machine to fail.
Another great advantage of this product is that it does not require snapshots on the (source) host. As the agent works within the guest operating system, it takes advantage of the natural fault tolerance designed into many applications. Furthermore, had we decided to use snapshotting technologies, we would not have been able to run them during the day, as it is not advisable to snapshot under high I/O load; we would then not be using the bandwidth available during the day, and would instead be placing all the load on that bandwidth at night. By replicating to a DR site during the day, the load is spread across our line over a twenty-four-hour period, making the best use of the available bandwidth without us having to increase it for night-time usage. One disadvantage of this technology is that it is not designed as a backup technology, purely as replication. In newer releases it is expected that there will be a snapshotting technology that can be used on the target server, so we will be able to keep a minimal number of snapshots. The other disadvantage is that it is expensive compared to Veeam, but it may not be necessary to use it on all production servers; on servers that have very little change, we can use a different technology to replicate in the evening.

The biggest disadvantage with Doubletake, because it works at the guest level, is that when a guest server is rebooted the Doubletake agent is stopped, and at that point it loses track of any changes made to the server during the reboot. Because of this, when the server comes back up, a re-mirror is required. A re-mirror does not necessarily mean all the data has to be retransmitted to the replication site, as a checksum is taken of each individual file, but on servers that hold a significant amount of data this re-mirror can take a long time, sometimes up to a week if you have a lot of small files. This is not necessarily a problem, because while the re-mirror is taking place the data is still being replicated.
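A rough sketch of the re-mirror idea: compare a checksum per file on source and target and resend only the files that differ. With many small files it is the sheer number of comparisons, rather than the data volume, that makes this slow. The names below are illustrative, not Doubletake's:

import hashlib

def checksum(data: bytes) -> str:
    return hashlib.md5(data).hexdigest()

def remirror(source_files, target_files):
    """Return the files that must be resent to bring the target back in sync."""
    to_resend = {}
    for path, data in source_files.items():
        if path not in target_files or checksum(target_files[path]) != checksum(data):
            to_resend[path] = data      # changed while the agent was down
    return to_resend

source = {"a.txt": b"new", "b.txt": b"same", "c.txt": b"added"}
target = {"a.txt": b"old", "b.txt": b"same"}
print(sorted(remirror(source, target)))         # ['a.txt', 'c.txt']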

http://www.visionsolutions.com/products/vision-products-overview.aspx