Support for NAS servers.
I know that I'm getting a little ahead of the technology, but I'm working out an idea of using VMotion or VMware HA to provide server redundancy between two datacenters. Unfortunately, one of the datacenters is offsite and connected over several L3 links.

Since my solution would require each VMware server to access the same SAN space, I was thinking of using an L2VPN to extend a VLAN to the offsite datacenter and using FCoE to mirror the SAN space.

Since I am not much of a SAN guy, I was hoping that some of the more knowledgeable people here would let me know if this is possible.
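For reference, the sort of L2VPN I had in mind is an L2TPv3 pseudowire carrying the storage VLAN between the two sites. A rough IOS-style sketch (the addresses, VLAN and interface names are just made up for illustration):

```
! Define the pseudowire class (same on both sides)
pseudowire-class STORAGE-VLAN
 encapsulation l2tpv3
 ip local interface Loopback0
!
! Cross-connect the storage VLAN subinterface to the remote datacenter
interface GigabitEthernet0/1.100
 encapsulation dot1Q 100
 xconnect 192.0.2.2 100 pw-class STORAGE-VLAN
```

The remote side would mirror this with its own addresses and the same virtual circuit ID.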
G'day,

It all depends on which disk arrays you intend to use and which replication package. Most disk arrays are set up in a master/slave arrangement: if you want to use the second (DR) copy of the array for writes, something usually has to trigger the slave array to be promoted to primary, or the relationship needs to be broken. This is so you don't get a split brain (a term used in clustering).

What you need is active/active array/LUN replication. I'm not sure who, if anyone, can do this at the moment. If you can tie VMotion in with some disk array replication software, you are on the money!

As far as FCoE goes, you could use it, but you might be better off with FCIP (Fibre Channel over IP).

Cheers,
Andrew
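For what it's worth, FCIP on something like a Cisco MDS looks roughly like this; the profile number and addresses are just examples, not a working config:

```
! Enable FCIP and bind a profile to a local Gigabit Ethernet IP
feature fcip
!
fcip profile 10
  ip address 10.1.1.1
!
! The FCIP tunnel interface points at the peer switch over IP
interface fcip1
  use-profile 10
  peer-info ipaddr 10.1.2.1
  no shutdown
```

Because it rides over TCP/IP, it happily crosses your L3 links, unlike FCoE.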
I think I read somewhere, or perhaps our very knowledgeable Cisco techo said, that FCoE was for closed systems and primarily for use inside the datacentre. So, if that's the case, you won't be using FCoE for any DR/replication solutions unless something changes in the future.

Also, if FCoE is no good for distances beyond the datacentre, then we will still need native FC over CWDM/DWDM and/or FCIP.

Am I right about the distance limitation?

Stephen
Stephen, you're correct. 10Gb cabling distances:

Cat6 - 55m
Cat6a - 100m
Twinax - 10m (very low power consumption; I was told about 0.1 watts per port)

FCoE is only meant for the data center, and since FCoE runs over Ethernet without TCP/IP, it can't be routed. It's meant to take a server with 2 NICs and 2 HBAs and combine them using a CNA (Converged Network Adapter), which means fewer cables, lower power consumption, and the ability to carve up the 10Gb pipe as you see fit (e.g. 2Gb for network traffic and 8Gb for FC per card). With dual CNAs, and with 40Gb and 100Gb Ethernet coming, you can quickly see why this is gaining a lot of attention.

I have some presentation documents on this but need to check whether they are under an NDA. If not, I'll post them here. Also, FCoE is still going through standards approval and is expected to be finalized by 4Q08.
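To give an idea of how the FC side maps onto Ethernet, here's roughly what FCoE config looks like on a Nexus-style switch: a virtual FC interface bound to a 10Gb Ethernet port, with the FCoE VLAN mapped to a VSAN. All the numbers are invented for illustration:

```
feature fcoe
!
! Map a dedicated FCoE VLAN to a VSAN
vlan 100
  fcoe vsan 10
!
! The CNA-facing 10Gb port trunks both data and FCoE VLANs
interface ethernet 1/1
  switchport mode trunk
  switchport trunk allowed vlan 1,100
!
! Virtual FC interface bound to the physical Ethernet port
interface vfc1
  bind interface ethernet 1/1
  no shutdown
!
vsan database
  vsan 10 interface vfc1
```

The vfc interface then behaves like a regular FC port for zoning purposes, while the same physical link carries your LAN traffic.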
Gary,

I am relatively excited about FCoE because of the continuing investment we have in servers and storage. If it can deliver, it will save us heaps of money someday.

However, one thing that slightly puzzles me is how load balancing will be done with Ethernet. All my SAN equipment is in two fabrics (real or virtual, depending on the instance), but I am not sure how FCoE works as far as providing two paths to everything. I heard a rumour that this is a sticking point which still needs to be sorted out.

I won't be racing into FCoE, but I am sure we will have it in a year or two. It gives me food for thought for new infrastructure, and I am waiting to see what HP and IBM do to provide a CNA for blades, which we have by the thousands...

Stephen