Wednesday, April 30, 2014

Clearing bash history

~/.bash_history holds the history.
To clear the bash history completely on the server, open a terminal and type: cat /dev/null > ~/.bash_history
An alternative is to symlink ~/.bash_history to /dev/null.
On my Ubuntu 12.10 box, the history came back when I logged in again, presumably because the history entries are also held in memory and flushed back to the file on logout. The following command worked for me:
cat /dev/null > ~/.bash_history && history -c && exit
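For the symlink alternative mentioned above, a minimal sketch (run it in the shell whose history you want to discard) could be:
history -c                        # clear the in-memory history of the current session
rm -f ~/.bash_history             # remove the existing history file
ln -s /dev/null ~/.bash_history   # future history writes are silently discarded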

OVFTOOL

OVFTool and how to use it.

http://michael.lustfield.net/misc/completely-automated-esxi-deployment

How to Use VMware's OVF Converter Tool:
VMware's OVF Tool (and OVF Tool User Guide) can be downloaded for free from the following location:
http://www.vmware.com/support/developer/ovf/
Install this utility on your local machine where your VM images reside.
In the following example, VMware OVF Tool 2.1 for Windows 64 bit was downloaded and installed on a Windows 7 64-bit machine to the following location:
c:\Program Files\VMware\VMware OVF Tool\
Before running OVF Tool, we would recommend ensuring that your image resides by itself in its own directory, for example:
e:\myvirtualmachines\myovf\myimage.ovf
We would then recommend creating a new (separate) directory for your converted VMX image, for example:
e:\myvirtualmachines\myvmx\
Once OVF Tool is installed and your conversion directory has been created, open a command prompt. This will be different depending on which OS your local machine is running; in Windows 7, navigate to "Start", type "cmd" in the search bar, and select "cmd.exe" when it appears in the search results.
Navigate to where OVF Tool is installed by using a command such as:
cd "c:\Program Files\VMware\VMware OVF Tool"
Run OVF Tool by using the following (or similar) command. Note that the first section of the command calls out the name and location of your current OVF file, and the second section of the command declares where the new VMX file should be placed (and what it should be called):
ovftool e:\myvirtualmachines\myovf\myimage.ovf e:\myvirtualmachines\myvmx\myimage.vmx
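ovftool can also deploy an OVF straight to an ESXi or vCenter host using a vi:// target locator, which is what the automated ESXi deployment link above builds on. A hedged example, where the host name, datastore and network names are placeholders rather than values from this post:
ovftool --name=myimage --datastore=datastore1 --network="VM Network" e:\myvirtualmachines\myovf\myimage.ovf vi://root@esxi01/
ovftool prompts for the root password and then uploads the VM to the host.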

VSFTPd config

https://help.ubuntu.com/10.04/serverguide/ftp-server.html

FTP Server

File Transfer Protocol (FTP) is a TCP protocol for uploading and downloading files between computers. FTP works on a client/server model. The server component is called an FTP daemon. It continuously listens for FTP requests from remote clients. When a request is received, it manages the login and sets up the connection. For the duration of the session it executes any commands sent by the FTP client.
Access to an FTP server can be managed in two ways:
  • Anonymous
  • Authenticated
In the Anonymous mode, remote clients can access the FTP server by using the default user account called "anonymous" or "ftp" and sending an email address as the password. In the Authenticated mode a user must have an account and a password. User access to the FTP server directories and files is dependent on the permissions defined for the account used at login. As a general rule, the FTP daemon will hide the root directory of the FTP server and change it to the FTP Home directory. This hides the rest of the file system from remote sessions.

vsftpd - FTP Server Installation

vsftpd is an FTP daemon available in Ubuntu. It is easy to install, set up, and maintain. To install vsftpd you can run the following command:
sudo apt-get install vsftpd

Anonymous FTP Configuration

By default vsftpd is configured to only allow anonymous download. During installation an ftp user is created with a home directory of /home/ftp. This is the default FTP directory.
If you wish to change this location, to /srv/ftp for example, simply create a directory in another location and change the ftp user's home directory:
sudo mkdir /srv/ftp
sudo usermod -d /srv/ftp ftp 
After making the change restart vsftpd:
sudo /etc/init.d/vsftpd restart
Finally, copy any files and directories you would like to make available through anonymous FTP to /srv/ftp.

User Authenticated FTP Configuration

To configure vsftpd to authenticate system users and allow them to upload files edit /etc/vsftpd.conf:
local_enable=YES
write_enable=YES
Now restart vsftpd:
sudo /etc/init.d/vsftpd restart
Now when system users login to FTP they will start in their home directories where they can download, upload, create directories, etc.
Similarly, by default, anonymous users are not allowed to upload files to the FTP server. To change this setting, you should uncomment the following line, and restart vsftpd:
anon_upload_enable=YES
[Warning]
Enabling anonymous FTP upload can be an extreme security risk. It is best to not enable anonymous upload on servers accessed directly from the Internet.
The configuration file consists of many configuration parameters. The information about each parameter is available in the configuration file. Alternatively, you can refer to the man page, man 5 vsftpd.conf for details of each parameter.

Securing FTP

There are options in /etc/vsftpd.conf to help make vsftpd more secure. For example users can be limited to their home directories by uncommenting:
chroot_local_user=YES
You can also limit a specific list of users to just their home directories:
chroot_list_enable=YES
chroot_list_file=/etc/vsftpd.chroot_list
After uncommenting the above options, create a /etc/vsftpd.chroot_list containing a list of users one per line. Then restart vsftpd:
sudo /etc/init.d/vsftpd restart
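For example, to restrict two local accounts (the user names below are placeholders) to their home directories:
echo "alice" | sudo tee -a /etc/vsftpd.chroot_list
echo "bob" | sudo tee -a /etc/vsftpd.chroot_list
sudo /etc/init.d/vsftpd restart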
Also, the /etc/ftpusers file is a list of users that are disallowed FTP access. The default list includes root, daemon, nobody, etc. To disable FTP access for additional users simply add them to the list.
FTP can also be encrypted using FTPS. Different from SFTP, FTPS is FTP over Secure Sockets Layer (SSL). SFTP is an FTP-like session over an encrypted SSH connection. A major difference is that users of SFTP need to have a shell account on the system, instead of a nologin shell. Providing all users with a shell may not be ideal for some environments, such as a shared web host.
To configure FTPS, edit /etc/vsftpd.conf and at the bottom add:
ssl_enable=Yes
Also, notice the certificate and key related options:
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
By default these options are set to the certificate and key provided by the ssl-cert package. In a production environment these should be replaced with a certificate and key generated for the specific host. For more information on certificates see the section called “Certificates” in the Ubuntu Server Guide.
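As a sketch, a host-specific self-signed certificate could be generated with openssl and then referenced from /etc/vsftpd.conf (the file names below are examples, not package defaults):
sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /etc/ssl/private/vsftpd.key -out /etc/ssl/certs/vsftpd.pem
# then point vsftpd at the new files:
# rsa_cert_file=/etc/ssl/certs/vsftpd.pem
# rsa_private_key_file=/etc/ssl/private/vsftpd.key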
Now restart vsftpd, and non-anonymous users will be forced to use FTPS:
sudo /etc/init.d/vsftpd restart
To allow users with a shell of /usr/sbin/nologin access to FTP, but have no shell access, edit /etc/shells adding the nologin shell:
# /etc/shells: valid login shells
/bin/csh
/bin/sh
/usr/bin/es
/usr/bin/ksh
/bin/ksh
/usr/bin/rc
/usr/bin/tcsh
/bin/tcsh
/usr/bin/esh
/bin/dash
/bin/bash
/bin/rbash
/usr/bin/screen
/usr/sbin/nologin
This is necessary because, by default vsftpd uses PAM for authentication, and the /etc/pam.d/vsftpd configuration file contains:
auth    required        pam_shells.so
The shells PAM module restricts access to shells listed in the /etc/shells file.
Most popular FTP clients can be configured to connect using FTPS. The lftp command-line FTP client has the ability to use FTPS as well.
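For example, lftp can be told to require TLS on both the control and data channels; the settings below are standard lftp options and the host name is a placeholder:
lftp -u username -e "set ftp:ssl-force true; set ftp:ssl-protect-data true" ftp.example.com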


Catbird Integrates with Cisco Application Centric Infrastructure (ACI)

New Partnership brings Policy Automation, Enforcement & Audit to Network Security and Compliance for Software-Defined Networks (SDN)


The integration of the Cisco ACI architecture with Catbird delivers an asset-based approach for compliance automation and enforcement. Catbird organizes applications into shared policy groups, called TrustZones®. Catbird TrustZones policy is applied based upon published compliance standards and frameworks, continuously monitoring for configuration changes, gathering evidence of control for audit, and taking immediate enforcement actions in case of changes that may compromise security and compliance posture.
Cisco ACI enables Catbird insertion anywhere in the network fabric, providing centralized management, ensuring automated security and compliance policy and elastic scaling. With Cisco ACI and Catbird, policy compliance is now continuous, enforced in real-time and fully automated, with visibility and control that exceeds that which is possible in conventional physical environments. The combined solution, with Catbird supporting the ACI policy model and APIC controller, will provide active policy automation and enforcement of industry standards such as PCI DSS 3.0, ISO 27001, HIPAA, and FISMA, reducing the cost and complexity of compliance and increasing the flexibility and elasticity of the application network.

Key Features of Catbird 6.0:
  • Multi-hypervisor support* – Consistent security policy automation and compliance enforcement across Microsoft Hyper-V™ and VMware vSphere®.
  • Management API – Enterprises and service providers can now integrate security policy and compliance enforcement into their existing provisioning and management processes. Catbird 6.0 API includes ACL and alert operations, asset searching and enumeration, compliance state retrieval, event operations, and Catbird TrustZones® management and configuration.
  • Expanded role-based administrative functions* – Six roles including auditor, operator, firewall operator, and compliance officer, allowing customers to precisely align policy management with existing administration, security and compliance roles.
  • Enhanced continuous monitoring – SCAP configuration checking allows users to download security benchmarks from the National Institute of Standards and Technology (NIST) and the Center for Internet Security (CIS) and run configuration checks against those benchmarks. With SCAP, customers can continuously monitor their security posture based upon codified security benchmarks established by credible third parties and define configuration checks that are a requirement for organizations including federal government agencies.
  • Cisco & VMware virtual firewall integration – First security policy automation and enforcement solution to orchestrate two of the industry's leading virtual firewalls, Cisco Virtual Secure Gateway (VSG) and VMware vCloud® Networking and Security™.  Customers can track and maintain the status of virtual firewalls and other security controls continuously, such as IDS/IPS, NAC, and virtual infrastructure monitoring, no matter where assets move on the network and what changes are made.
Catbird will be showcasing the new 6.0 release at RSA Conference 2014 in San Francisco, Feb. 24-28 2014, in booth 2505. The software is available immediately and a fully-functional evaluation version is available at www.catbird.com/demo.


Feb 2014
Cisco announced last week that its rapidly expanding ACI ecosystem now includes the A10 Networks aCloud Services Architecture based on the Thunder ADC Application Delivery Controllers, as well as the Catbird IDS/IPS virtual security solutions. These new ACI ecosystem vendors are announcing support for the ACI policy model and integration with the Application Infrastructure Policy Controller (APIC) which will accelerate and automate deployment and provisioning of these services into application networks. This should also resolve any speculation that the ACI ecosystem would not be including technology vendors that compete with Cisco’s other lines of business, as Cisco expands the solution alternatives for customers.
Each of the solutions will rely on two primary capabilities of the APIC and ACI to provide a policy-based automation framework and policy-based service insertion technology. A policy-based automation framework enables resources to be dynamically provisioned and configured according to application requirements. As a result, core services such as firewalls, application delivery controllers (ADC) and Layer 4 through 7 switches can be consumed by applications and made ready to use in a single automated step.
A policy-based service insertion solution automates the step of routing network traffic to the correct services based on application policies. The automated addition, removal, and reordering of services allows applications to quickly change the resources that they require without the need to rewire and reconfigure the network or relocate the services. For example, if the business decision is made to use a web application firewall found in a modern ADC as a cost-effective way of achieving PCI compliance, administrators would simply need to redefine the policy for the services that should be used for the related applications. The Cisco APIC can dynamically distribute new policies to the infrastructure and service nodes in minutes, without requiring the network be manually changed.


Integrating L4-7 Services in the Open ACI Architecture
So, when technology vendors like these expressly commit to supporting the ACI architecture, what is the integration model to the APIC controller and the ACI fabric? First of all, service automation requires a vendor device package (see below), which is an XML structure defining the attributes, policies and capabilities of the supported L4-7 device. When APIC provisions new application networks that require these services, the device package is loaded into APIC, along with device-specific Python scripts. APIC then uses the device configuration model to pass appropriate configuration details to the device. Script handlers on the device are integrated through REST APIs on the device or CLI.


Tuesday, April 29, 2014

Cisco onePK

Cisco's developer toolkit onePK is an element within Cisco's Open Network Environment software-defined networking (SDN) strategy. onePK is an easy-to-use toolkit for development, automation, rapid service creation, and more. With its rich set of APIs, you can easily access the valuable data inside your network.

Build or extend applications from your routers and switches to servers and new business platforms. Automate current services or create new ones on demand, when and where you need them and faster than ever. onePK makes your network more powerful and flexible while giving you the control you need. Users also have access to an all-in-one development environment that includes simulated network elements.

SDK
https://developer.cisco.com/site/networking/one/onepk/getting-started/index.gsp

all-in-one-VM-1.2.0-173-cisco-onePK.ova
https://drive.google.com/file/d/0B8hUGU8trXU6dUhOU3ZLT001bXc/edit?usp=sharing

Compatibility Matrix


The table below shows minimum hardware and software requirements for Cisco onePK 1.2.
Cisco Devices (onePK Release 1.2)
  • Cisco ASR 1000 and ISR 4400 Series Routers: Cisco IOS XE 3.12.0S
  • Cisco ISR G2: Cisco IOS Release 15.4(2)T
  • Cisco ASR 9000 Series Routers: Cisco IOS XR Release 5.2.0
Cisco Devices (onePK Release 1.1.1)
  • Cisco ASR 1000 and ISR 4400 Series Routers: Cisco IOS XE 3.11.0S
  • Cisco ISR G2: Cisco IOS Release 15.4(1)T
  • Nexus 3000 Series Switches: Cisco NX-OS Release 6.0(2)U2(1)
  • Nexus 5000 and 6000 Series Switches: Cisco NX-OS Release 6.0(2)N3(1)
  • Cisco ASR 9000 Series Routers: Cisco IOS XR Release 5.1.1
Development Workstation and Process Hosting
  • Workstation operating system: Any POSIX Linux environment with a minimum kernel version of 2.6.x
  • GLibC 2.15
  • GNU Compiler Collection (GCC) Version 4.5.x
  • GNU Make 3.4.6+
  • OpenSSL 0.9.6d or later
  • Java 1.6.0_31
  • Eclipse Indigo Service Release 2 or later
  • Maven 3.0.3
  • Python 2.7.3 or later
All-in-One Virtual Machine
Virtualization software
  • VMware ESXi 5 or later
  • VMware Workstation 9 or later
  • VMware Fusion 5 or later
  • Oracle VirtualBox 4.2 or later
Workstation hardware minimum requirements
  • 2 virtual CPUs
  • 4 GB RAM
  • 20 GB disk space

Designed for Flexibility 

onePK has the capability to:
  • Integrate with PyCharm, PyDev, Eclipse, IDLE, NetBeans, and more
  • Support commonly used languages, including C, Java, and Python
  • Run on any server or directly on your network elements
  • Use APIs to extend, modify, and tailor your network to your business needs
  • Tie in easily with third-party tools and workflows

Unlimited Possibilities

Use onePK for new application-enablement, service automation, and more. With onePK you can orchestrate and enhance your network elements. You can also:
  • Customize route logic
  • Create flow-based services such as quality of service (QoS)
  • Adapt applications for changing network conditions such as bandwidth
  • Automate workflows spanning multiple devices
  • Empower management applications with new information

Enterasys - Extreme Networks

Extreme Networks - Extremely Good Enterasys Deal?
http://seekingalpha.com/article/1693092-extreme-networks-extremely-good-enterasys-deal

Extreme Networks announced that it has reached an agreement to acquire Enterasys. Extreme will pay $180 million in cash for the company.
Enterasys focuses on wired and wireless network infrastructure, as well as security and management solutions. Enterasys has large clients including ING and Comcast, and following a two-year integration effort, operations will be merged with Extreme Networks.
The 933 employees of Enterasys will be added to Extreme Networks creating a company with almost 1,700 employees.
The acquisition brings strong wireless LAN, network management and security technologies. Combined with increased research and development efforts it will accelerate the future ambitions of the combined company. CEO Chuck Berger commented on the deal:

Our number one priority in combining these companies is to ensure an even more positive customer experience, preserving the value of their current investment, avoiding any disruption and delivering products and technologies that combine the best of both companies.
Enterasys generated trailing revenues of around $340 million over the past 12 months. As such, the deal values the company at a little over 0.5 times annual revenues.
Extreme Networks will finance the purchase with $105 million in cash and $75 million from a new credit line. The deal will be accretive to non-GAAP earnings in 2014 and will result in positive cash flows from operations.

Aryaka

Software-as-a-service WAN optimizer: primarily used between multi-site business units, targeting mid-size businesses.





Points of Presence (POPs): Instead of relying on expensive, hardware-dependent LANs, the cloud startup goes for POPs. These are private stations, all over the planet, located close to the end users. Each POP acts as the reference point and provides the same function, though with better redundancy, as a private cloud communications infrastructure or a local network provider.

WAN: The wide area that this company covers can be global, national or local. Because the infrastructure is provided by Aryaka, clients do not need to install any routing devices or rely on an unreliable network connection from a telecommunications provider. Secondly, the startup offers remote management of the systems in place to provide the redundancy needed to keep the network operating.

Bandwidth: When operating over a WAN, it is essential that performance does not suffer from issues like network congestion when many clients rely on one service. This cloud startup offers an alternative, going by the technical name of Advanced Redundancy Removal (ARR), in which compression and deduplication remove redundant data. The upshot is that bandwidth usage can drop by as much as 98 percent. This means that users can finally get cloud-like benefits, scaling their bandwidth economically by a factor of roughly 20 to 50 times. This is where the secret of getting a Local Area Network-like experience over the WAN comes in.

Fatpipe




Startup Resources

Virtual office space , Startup Resources
http://www.thestartupcentre.com/

http://www.tsctribe.co/events/
http://www.madrasmade.net/
http://www.in50hrs.com/
http://www.dosamatic.com
http://www.ventunotech.com/
http://www.nexusvp.com/companies.asp
 http://www.aryaka.com/

Chart Library Java Script
http://www.fusioncharts.com

Sunday, April 27, 2014

Google Cloud Platform


With Google Cloud Platform, developers can build, test and deploy applications on Google's highly-scalable and reliable infrastructure. Choose from computing, storage, big data, and application services for your web, mobile and backend solutions. And check out our developer tools, which will reduce your time from start to deploy.
  • App Engine
  • BigQuery
  • Cloud Datastore
  • Cloud DNS
  • Compute Engine
  • Cloud SQL
  • Cloud Storage
  • Prediction API
  • Translate API

Google Compute Engine (GCE)


Follow up :

Run VMware environments in AWS or Google without modifying your VMs or networking

Ravello Systems Enables Enterprises to Accelerate Development of Complex Applications and Android Mobile Clients with Google Cloud Platform 

Google announced Compute Engine on June 28, 2012 at Google I/O 2012 in a limited preview mode. In April 2013, GCE was made available to customers with Gold Support Package. On February 25, 2013, Google announced that RightScale was their first reseller.[1] During Google I/O 2013, many features including sub-hour billing, shared-core instance types, larger persistent disks, enhanced SDN-based networking capabilities and ISO 27001 certification were announced. GCE became available to everyone on May 15, 2013. Layer 3 load balancing came to GCE on August 7, 2013. Finally, on December 2, 2013, Google announced that GCE was generally available. It also expanded OS support, enabled live migration of VMs, added 16-core instances and faster persistent disks, and lowered the price of standard instances. At the Google Cloud Platform Live event on 25 March 2014, Urs Hölzle, Senior VP of technical infrastructure, announced sustained usage discounts, support for Microsoft Windows Server 2008 R2, Cloud DNS and Cloud Deployment Manager.



Pricing

All machine types are charged a minimum of 10 minutes. For example, if you run your instance for 2 minutes, you will be billed for 10 minutes of usage. After 10 minutes, instances are charged in 1 minute increments, rounded up to the nearest minute. For example, an instance that lives for 11.25 minutes will be charged for 12 minutes of usage.
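As a quick worked example, take the n1-standard-1 US rate of $0.070/hour from the Standard table below; billing the 11.25-minute instance above at that rate works out as:
# 11.25 minutes rounds up to 12 billable minutes (never less than the 10 minute minimum)
echo "scale=4; 0.070 * 12 / 60" | bc    # .0140 -> about $0.014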

Prices are effective April 1, 2014.

Machine Type Pricing

Standard

Instance type | Virtual Cores | Memory | US$/Hour (US hosted) | US$/Hour (Europe hosted) | US$/Hour (APAC hosted)
n1-standard-1 | 1 | 3.75GB | $0.070 | $0.077 | $0.077
n1-standard-2 | 2 | 7.5GB | $0.140 | $0.154 | $0.154
n1-standard-4 | 4 | 15GB | $0.280 | $0.308 | $0.308
n1-standard-8 | 8 | 30GB | $0.560 | $0.616 | $0.616
n1-standard-16 | 16 | 60GB | $1.120 | $1.232 | $1.232
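As a rough sanity check of what an always-on instance costs (assuming the US rate above, a 30-day month, and ignoring the sustained use discount described below):
# n1-standard-1 at $0.070/hour, running 24x7 for 30 days
echo "0.070 * 24 * 30" | bc    # 50.400 -> about $50/month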

High Memory

Machines for tasks that require more memory relative to virtual cores
Instance type | Virtual Cores | Memory | US$/Hour (US hosted) | US$/Hour (Europe hosted) | US$/Hour (APAC hosted)
n1-highmem-2 | 2 | 13GB | $0.164 | $0.180 | $0.180
n1-highmem-4 | 4 | 26GB | $0.328 | $0.360 | $0.360
n1-highmem-8 | 8 | 52GB | $0.656 | $0.720 | $0.720
n1-highmem-16 | 16 | 104GB | $1.312 | $1.440 | $1.440

High CPU

Machines for tasks that require more virtual cores relative to memory
Instance type | Virtual Cores | Memory | US$/Hour (US hosted) | US$/Hour (Europe hosted) | US$/Hour (APAC hosted)
n1-highcpu-2 | 2 | 1.80GB | $0.088 | $0.096 | $0.096
n1-highcpu-4 | 4 | 3.60GB | $0.176 | $0.192 | $0.192
n1-highcpu-8 | 8 | 7.20GB | $0.352 | $0.384 | $0.384
n1-highcpu-16 | 16 | 14.40GB | $0.704 | $0.768 | $0.768

Shared Core

Machines for tasks that don't require a lot of resources but do have to remain online for long periods of time.
Instance type | Virtual Cores | Memory | US$/Hour (US hosted) | US$/Hour (Europe hosted) | US$/Hour (APAC hosted)
f1-micro | 1 | 0.60GB | $0.013 | $0.014 | $0.014
g1-small | 1 | 1.70GB | $0.035 | $0.0385 | $0.0385

Sustained Use Discounts

Once you use an instance for over 25% of a billing cycle, your price starts dropping. This discount is applied automatically, with no sign-up or up-front commitment required. If you use an instance for 100% of the billing cycle, you get a 30% net discount over our already low prices.
More details
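One hedged way to read the 30% figure, assuming the incremental block rates Google described at launch (each successive 25% of the month billed at 100%, 80%, 60% and 40% of the base rate), is to average them:
# average rate across a fully used month
echo "scale=2; (1.00 + 0.80 + 0.60 + 0.40) / 4" | bc    # .70 of the base price, i.e. a 30% net discount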

Network Pricing

Ingress: Free
Egress to the same Zone: Free
Egress to a different Cloud service within the same Region: Free
Egress to Google products (such as YouTube, Maps, Drive): Free *
Egress to a different Zone in the same Region (per GB): $0.01
Egress to a different Region within the US: $0.01 *
Inter-continental Egress: at Internet Egress rates
Internet Egress (Americas/EMEA destination), per GB:
  • 0-1 TB in a month: $0.12
  • 1-10 TB: $0.11
  • 10+ TB: $0.08
Internet Egress (APAC destination), per GB:
  • 0-1 TB in a month: $0.21
  • 1-10 TB: $0.18
  • 10+ TB: $0.15

Load Balancing and Protocol Forwarding

Hourly service charge:
  • US: $0.025 (5 rules included), $0.010 per additional rule
  • Europe: $0.028 (5 rules included), $0.011 per additional rule
  • APAC: $0.028 (5 rules included), $0.011 per additional rule
Per GB of data processed: US $0.008, Europe $0.009, APAC $0.009

Persistent Disk Pricing

Provisioned space: $0.04 per GB / month
Snapshot storage: $0.125 per GB / month
IO operations: No additional charge

Image Storage

Image storage: $0.085 per GB / month

IP Address Pricing

Static IP address (assigned but unused) $0.01 / hour
Static IP address (assigned and in use) Free
Ephemeral IP address (attached to instance) Free
   * promotional pricing


Friday, April 25, 2014

Cisco UCS pricing: It's complicated



Network World - As with any server product, there are lots of ways to configure UCS, including different levels of CPU, memory and storage. Cisco has a 29-page document to help you get it right, and 29 pages are not overkill. To get an idea of what this might cost, we configured two separate systems: one with 40 dual-socket blades, and another with 80 of the same blades.
We picked Intel 5600-series (Westmere-EP) X5675 CPUs, each with six cores running at 3.06 GHz, an expensive but pretty common choice for enterprise virtualization workloads. We also packed in 96GB of memory for each system, and put in only a single small SATA drive for booting, logging and diagnostics.
The list price for the 40 blade system was about $950,000 ($23,850 per blade, or $1,987 per core) and for the 80 blade system about $1,850,000 ($22,980 per blade, or $1,915 per core). Cisco was quick to remind us that deals of this size are routinely discounted 40% to 50%, taking the totals down to $525,000 ($13,117 per blade, or $1,093 per core) for 40 blades and $1,011,000 ($12,637 per blade, or $1,053 per core) for the 80 blade system.
We also calculated the "UCS tax," by comparing the cost of the blades (CPUs, memory, hard drive, network cards) and non-UCS networking alternatives against the total cost of the UCS integrated system. We found that UCS has a "tax" of about 15%, meaning that you're paying about 15% more to have the benefits of blade servers and integrated storage/data networking compared to just going do-it-yourself with 1U servers, standalone switches, and, in the case of the 80-blade system, 280 more patch cords.

The Nutanix Bible

Courtesy: http://stevenpoitras.com/the-nutanix-bible/

 

The Nutanix Bible

  1. Intro

  2. Book of Nutanix

  3. Book of vSphere

  4. Book of Hyper-V

  5. Revisions

Intro

Welcome to The Nutanix Bible!  I work with the Nutanix platform on a daily basis – trying to find issues, push its limits as well as administer it for my production benchmarking lab.  This page is being produced to serve as a living document outlining tips and tricks used every day by myself and a variety of engineers at Nutanix.  This will also include summary items discussed as part of the Advanced Nutanix series.  NOTE: This is not an official reference so tread at your own risk!

Book of Nutanix

Architecture

Converged Platform

The Nutanix solution is a converged storage + compute solution which leverages local components and creates a distributed platform for virtualization aka virtual computing platform. The solution is a bundled hardware + software appliance which houses 2 (6000/7000 series) or 4 nodes (1000/2000/3000/3050 series) in a 2U footprint.
Each node runs an industry standard hypervisor (ESXi, KVM, Hyper-V currently) and the Nutanix Controller VM (CVM).  The Nutanix CVM is what runs the Nutanix software and serves all of the I/O operations for the hypervisor and all VMs running on that host.  For the Nutanix units running VMware vSphere, the SCSI controller, which manages the SSD and HDD devices, is directly passed to the CVM leveraging VM-Direct Path (Intel VT-d).  In the case of Hyper-V the storage devices are passed through to the CVM.
Below is an example of what a typical node logically looks like. Together, a group of Nutanix Nodes forms a distributed platform called the Nutanix Distributed Filesystem (NDFS).  NDFS appears to the hypervisor like any centralized storage array, however all of the I/Os are handled locally to provide the highest performance.  More detail on how these nodes form a distributed system can be found below.
Below is an example of how these Nutanix nodes form NDFS.

Cluster Components

The Nutanix platform is composed of the following high-level components:
Cassandra
  • Key Role: Distributed metadata store
  • Description: Medusa stores and manages all of the cluster metadata in a distributed ring like manner based upon a heavily modified Apache Cassandra.  The Paxos algorithm is utilized to enforce strict consistency.  This service runs on every node in the cluster.  Cassandra is accessed via an interface called Medusa.
Zookeeper
  • Key Role: Cluster configuration manager
  • Description: Zeus stores all of the cluster configuration including hosts, IPs, state, etc. and is based upon Apache Zookeeper.  This service runs on three nodes in the cluster, one of which is elected as a leader.  The leader receives all requests and forwards them to the peers.  If the leader fails to respond a new leader is automatically elected.   Zookeeper is accessed via an interface called Zeus.
Stargate
  • Key Role: Data I/O manager
  • Description: Stargate is responsible for all data management and I/O operations and is the main interface from the hypervisor (via NFS, iSCSI or SMB).  This service runs on every node in the cluster in order to serve localized I/O.
Curator
  • Key Role: Map reduce cluster management and cleanup
  • Description: Curator is responsible for managing and distributing tasks throughout the cluster including disk balancing, proactive scrubbing, and many more items.  Curator runs on every node and is controlled by an elected Curator Master who is responsible for the task and job delegation.
Prism
  • Key Role: UI and API
  • Description: Prism is the management gateway for components and administrators to configure and monitor the Nutanix cluster.  This includes Ncli, the HTML5 UI and REST API.  Prism runs on every node in the cluster and uses an elected leader like all components in the cluster.
Genesis
  • Key Role: Cluster component & service manager
  • Description:  Genesis is a process which runs on each node and is responsible for any services interactions (start/stop/etc.) as well as for the initial configuration.  Genesis is a process which runs independently of the cluster and does not require the cluster to be configured/running.  The only requirement for genesis to be running is that Zookeeper is up and running.  The cluster_init and cluster_status pages are displayed by the genesis process.
Chronos
  • Key Role: Job and Task scheduler
  • Description: Chronos is responsible for taking the jobs and tasks resulting from a Curator scan and scheduling/throttling tasks among nodes.  Chronos runs on every node and is controlled by an elected Chronos Master who is responsible for the task and job delegation and runs on the same node as the Curator Master.
Cerebro
  • Key Role: Replication/DR manager
  • Description: Cerebro is responsible for the replication and DR capabilities of NDFS.  This includes the scheduling of snapshots, the replication to remote sites, and the site migration/failover.  Cerebro runs on every node in the Nutanix cluster and all nodes participate in replication to remote clusters/sites.
Pithos
  • Key Role: vDisk configuration manager
  • Description: Pithos is responsible for vDisk (NDFS file) configuration data.  Pithos runs on every node and is built on top of Cassandra.

Data Structure

The Nutanix Distributed Filesystem is composed of the following high-level structs:
Storage Pool
  • Key Role: Group of physical devices
  • Description: A storage pool is a group of physical storage devices including PCIe SSD, SSD, and HDD devices for the cluster.  The storage pool can span multiple Nutanix nodes and is expanded as the cluster scales.  In most configurations only a single storage pool is leveraged.
Container
  • Key Role: Group of VMs/files
  • Description: A container is a logical segmentation of the Storage Pool and contains a group of VMs or files (vDisks).  Some configuration options (eg. RF) are configured at the container level, however they are applied at the individual VM/file level.  Containers typically have a 1 to 1 mapping with a datastore (in the case of NFS/SMB).
vDisk
  • Key Role: vDisk
  • Description: A vDisk is any file over 512KB on NDFS including .vmdks and VM hard disks.  vDisks are composed of extents which are grouped and stored on disk as an extent group.
Below we show how these map between NDFS and the hypervisor.
Extent
  • Key Role: Logically contiguous data
  • Description: An extent is a 1MB piece of logically contiguous data which consists of n number of contiguous blocks (varies depending on guest OS block size).  Extents are written/read/modified on a sub-extent basis (aka slice) for granularity and efficiency.  An extent’s slice may be trimmed when moving into the cache depending on the amount of data being read/cached.
Extent Group
  • Key Role: Physically contiguous stored data
  • Description: An extent group is a 1MB or 4MB piece of physically contiguous stored data.  This data is stored as a file on the storage device owned by the CVM.  Extents are dynamically distributed among extent groups to provide data striping across nodes/disks to improve performance.  NOTE: as of 4.0 extent groups can now be either 1MB or 4MB depending on dedupe.
Below we show how these structs relate between the various filesystems.

Here is another graphical representation of how these units are logically related.

I/O Path Overview

The Nutanix I/O path is composed of the following high-level components:
OpLog
  • Key Role: Persistent write buffer
  • Description: The OpLog is similar to a filesystem journal and is built to handle bursty writes, coalesce them and then sequentially drain the data to the extent store.  Upon a write the OpLog is synchronously replicated to another n number of CVMs’ OpLogs before the write is acknowledged for data availability purposes.  All CVM OpLogs partake in the replication and are dynamically chosen based upon load.  The OpLog is stored on the SSD tier on the CVM to provide extremely fast write I/O performance, especially for random I/O workloads.  For sequential workloads the OpLog is bypassed and the writes go directly to the extent store.  If data is currently sitting in the OpLog and has not been drained, all read requests will be fulfilled directly from the OpLog until it has been drained, at which point it will be served by the extent store/content cache.  For containers where fingerprinting (aka dedupe) has been enabled, all write I/Os will be fingerprinted using a hashing scheme allowing them to be deduped based upon fingerprint in the content cache.
Extent Store
  • Key Role: Persistent data storage
  • Description: The Extent Store is the persistent bulk storage of NDFS and spans SSD and HDD and is extensible to facilitate additional devices/tiers.  Data entering the extent store is either being A) drained from the OpLog or B) is sequential in nature and has bypassed the OpLog directly.  Nutanix ILM will determine tier placement dynamically based upon I/O patterns and will move data between tiers.
Content Cache
  • Key Role: Dynamic read cache
  • Description: The Content Cache (aka “Elastic Dedupe Engine”) is a deduped read cache which spans both the CVM’s memory and SSD.  Upon a read request of data not in the cache (or based upon a particular fingerprint) the data will be placed in to the single-touch pool of the content cache which completely sits in memory where it will use LRU until it is ejected from the cache.  Any subsequent read request will “move” (no data is actually moved, just cache metadata) the data into the memory portion of the multi-touch pool which consists of both memory and SSD.  From here there are two LRU cycles, one for the in-memory piece upon which eviction will move the data to the SSD section of the multi-touch pool where a new LRU counter is assigned.  Any read request for data in the multi-touch pool will cause the data to go to the peak of the multi-touch pool where it will be given a new LRU counter.  Fingerprinting is configured at the container level and can be configured via the UI.  By default fingerprinting is disabled.
  • Below we show a high-level overview of the Content Cache.
Extent Cache
  • Key Role: In-memory read cache
  • Description: The Extent Cache is an in-memory read cache that is completely in the CVM’s memory.  This will store non-fingerprinted extents for containers where fingerprinting and dedupe is disabled.  As of version 3.5 this is separate from the Content Cache, however these will be merging in a subsequent release.

How It Works

Data Protection

The Nutanix platform currently uses a resiliency factor aka replication factor (RF) and checksum to ensure data redundancy and availability in the case of a node or disk failure or corruption.  As explained above the OpLog acts as a staging area to absorb incoming writes onto a low-latency SSD tier.  Upon being written to the local OpLog the data is synchronously replicated to another one or two Nutanix CVM’s OpLog (dependent on RF) before being acknowledged (Ack) as a successful write to the host.  This ensures that the data exists in at least two or three independent locations and is fault tolerant.
NOTE: For RF3 a minimum of 5 nodes is required since metadata will be RF5.  Data RF is configured via Prism and is done at the container level.
All nodes participate in OpLog replication to eliminate any “hot nodes” and ensure linear performance at scale.  While the data is being written a checksum is computed and stored as part of its metadata. Data is then asynchronously drained to the extent store where the RF is implicitly maintained.  In the case of a node or disk failure the data is then re-replicated among all nodes in the cluster to maintain the RF.  Any time the data is read the checksum is computed to ensure the data is valid.  In the event where the checksum and data don’t match, the replica of the data will be read and will replace the non-valid copy.
Below we show an example of what this logically looks like.

Data Locality

Being a converged (compute+storage) platform, I/O and data locality is key to cluster and VM performance with Nutanix.  As explained above in the I/O path, all read/write IOs are served by the local Controller VM (CVM) which is on each hypervisor adjacent to normal VMs.  A VM’s data is served locally from the CVM and sits on local disks under the CVM’s control.  When a VM is moved from one hypervisor node to another (or during a HA event) the newly migrated VM’s data will be served by the now local CVM.
When reading old data (stored on the now remote node/CVM) the I/O will be forwarded by the local CVM to the remote CVM.  All write I/Os will occur locally right away.  NDFS will detect the I/Os are occurring from a different node and will migrate the data locally in the background allowing for all read I/Os to now be served locally.  The data will only be migrated on a read as to not flood the network.
Below we show an example of how data will “follow” the VM as it moves between hypervisor nodes.

Scalable Metadata

Metadata is at the core of any intelligent system and is even more critical for any filesystem or storage array.  In terms of NDFS there are a few key structs that are critical for its success: it has to be right 100% of the time (aka “strictly consistent”), it has to be scalable, and it has to perform at massive scale.  As mentioned in the architecture section above, NDFS utilizes a “ring like” structure as a key-value store which stores essential metadata as well as other platform data (eg. stats, etc.).
In order to ensure metadata availability and redundancy a RF is utilized among an odd number of nodes (eg. 3, 5, etc.). Upon a metadata write or update the row is written to a node in the ring and then replicated to n number of peers (where n is dependent on cluster size).  A majority of nodes must agree before anything is committed, which is enforced using the Paxos algorithm.  This ensures strict consistency for all data and metadata stored as part of the platform.
Below we show an example of a metadata insert/update for a 4 node cluster.
Performance at scale is also another important struct for NDFS metadata.  Contrary to traditional dual-controller or “master” models, each Nutanix node is responsible for a subset of the overall platform’s metadata.  This eliminates the traditional bottlenecks by allowing metadata to be served and manipulated by all nodes in the cluster.  A consistent hashing scheme is utilized to minimize the redistribution of keys during cluster size modifications (aka “add/remove node”).  When the cluster scales (eg. from 4 to 8 nodes), the nodes are inserted throughout the ring between nodes for “block awareness” and reliability.
Below we show an example of the metadata “ring” and how it scales.

Shadow Clones

The Nutanix Distributed Filesystem has a feature called ‘Shadow Clones’ which allows for distributed caching of particular vDisks or VM data which is in a ‘multi-reader’ scenario.  A great example of this is during a VDI deployment, where many ‘linked clones’ forward read requests to a central master or ‘Base VM’.  In the case of VMware View this is called the replica disk and is read by all linked clones, and in XenDesktop it is called the MCS Master VM.  This will also work in any scenario which may be a multi-reader scenario (eg. deployment servers, repositories, etc.).
Data or I/O locality is critical for the highest possible VM performance and a key struct of NDFS.  With Shadow Clones, NDFS will monitor vDisk access trends similar to what it does for data locality.  However in the case there are requests occurring from more than two remote CVMs (as well as the local CVM), and all of the requests are read I/O, the vDisk will be marked as immutable.  Once the disk has been marked as immutable the vDisk can then be cached locally by each CVM making read requests to it (aka Shadow Clones of the base vDisk). This will allow VMs on each node to read the Base VM’s vDisk locally.
In the case of VDI, this means the replica disk can be cached by each node and all read requests for the base will be served locally.  NOTE:  The data will only be migrated on a read as to not flood the network and allow for efficient cache utilization.  In the case where the Base VM is modified the Shadow Clones will be dropped and the process will start over.  Shadow clones are disabled by default (as of 3.5) and can be enabled/disabled using the following NCLI command: ncli cluster edit-params enable-shadow-clones=true.
Below we show an example of how Shadow Clones work and allow for distributed caching.

Elastic Dedupe Engine

The Elastic Dedupe Engine is a software based feature of NDFS which allows for data deduplication in the capacity (HDD) and performance (SSD/Memory) tiers.  Sequential streams of data are fingerprinted during ingest using a SHA-1 hash at a 16K granularity.  This fingerprint is only done on data ingest and is then stored persistently as part of the written block’s metadata.  NOTE: Initially a 4K granularity was used for fingerprinting, however after testing 16K offered the best blend of dedupability with reduced metadata overhead.  When deduped data is pulled into the cache this is done at 4K.
Contrary to traditional approaches which utilize background scans, requiring the data to be re-read, Nutanix performs the fingerprint in-line on ingest.  For duplicate data that can be deduplicated in the capacity tier, the data does not need to be scanned or re-read; essentially, duplicate copies can simply be removed.
Below we show an example of how the Elastic Dedupe Engine scales and handles local VM I/O requests.
Fingerprinting is done during data ingest of data with an I/O size of 64K or greater.  Intel acceleration is leveraged for the SHA-1 computation which accounts for very minimal CPU overhead.  In cases where fingerprinting is not done during ingest (eg. smaller I/O sizes), fingerprinting can be done as a background process. The Elastic Deduplication Engine spans both the capacity disk tier (HDD) and the performance tier (SSD/Memory).  As duplicate data is determined, based upon multiple copies of the same fingerprints, a background process will remove the duplicate data using the NDFS MapReduce framework (Curator).
For data that is being read, the data will be pulled into the NDFS Content Cache which is a multi-tier/pool cache.  Any subsequent requests for data having the same fingerprint will be pulled directly from the cache.  To learn more about the Content Cache and pool structure, please refer to the ‘Content Cache’ sub-section in the I/O path overview, or click HERE.
Below we show an example of how the Elastic Dedupe Engine interacts with the NDFS I/O path.

Networking and I/O

The Nutanix platform does not leverage any backplane for inter-node communication and only relies on a standard 10GbE network.  All storage I/O for VMs running on a Nutanix node is handled by the hypervisor on a dedicated private network.  The I/O request will be handled by the hypervisor which will then forward the request to the private IP on the local CVM.  The CVM will then perform the remote replication with other Nutanix nodes using its external IP over the public 10GbE network.
For all read requests these will be served completely locally in most cases and never touch the 10GbE network. This means that the only traffic touching the public 10GbE network will be NDFS remote replication traffic and VM network I/O.  There will however be cases where the CVM will forward requests to other CVMs in the cluster in the case of a CVM being down or data being remote.  Also, cluster wide tasks such as disk balancing will temporarily generate I/O on the 10GbE network.
Below we show an example of how the VM’s I/O path interacts with the private and public 10GbE network.

CVM Autopathing

Reliability and resiliency are key, if not the most important, pieces of NDFS.  Being a distributed system NDFS is built to handle component, service and CVM failures.  In this section I’ll cover how CVM “failures” are handled (I’ll cover how we handle component failures in a future update).  A CVM “failure” could include a user powering down the CVM, a CVM rolling upgrade, or any event which might bring down the CVM.
NDFS has a feature called autopathing where, when a local CVM becomes unavailable, the I/Os are then transparently handled by other CVMs in the cluster. The hypervisor and CVM communicate using a private 192.168.5.0 network on a dedicated vSwitch (more on this above).  This means that all storage I/Os go to the internal IP address on the CVM (192.168.5.2).  The external IP address of the CVM is used for remote replication and for CVM communication.
Below we show an example of what this looks like.  In the event of a local CVM failure the local 192.168.5.2 address previously hosted by the local CVM is unavailable.  NDFS will automatically detect this outage and will redirect these I/Os to another CVM in the cluster over 10GbE.  The re-routing is done transparently to the hypervisor and VMs running on the host.  This means that even if a CVM is powered down the VMs will still continue to be able to perform I/Os to NDFS.  NDFS is also self-healing, meaning it will detect the CVM has been powered off and will automatically reboot or power-on the local CVM.  Once the local CVM is back up and available, traffic will then seamlessly be transferred back and served by the local CVM.
Below we show a graphical representation of how this looks for a failed CVM.

Disk Balancing

NDFS is designed to be a very dynamic platform which can react to various workloads as well as allow heterogeneous node types: compute heavy (3050, etc.) and storage heavy (60X0, etc.) to be mixed in a single cluster.  Ensuring uniform distribution of data is an important item when mixing nodes with larger storage capacities.
NDFS has a native feature called disk balancing which is used to ensure uniform distribution of data throughout the cluster.  Disk balancing works on a node’s utilization of its local storage capacity and is integrated with NDFS ILM.  Its goal is to keep utilization uniform among nodes once the utilization has breached a certain threshold.
Below we show an example of a mixed cluster (3050 + 6050) in an “unbalanced” state. Disk balancing leverages the NDFS Curator framework and is run as a scheduled process as well as when a threshold has been breached (eg. local node capacity utilization > n %).  In the case where the data is not balanced Curator will determine which data needs to be moved and will distribute the tasks to nodes in the cluster. In the case where the node types are homogeneous (eg. 3050) utilization should be fairly uniform.
However, if there are certain VMs running on a node which are writing much more data than others there can become a skew in the per node capacity utilization.  In this case disk balancing would run and move the coldest data on that node to other nodes in the cluster. In the case where the node types are heterogeneous (eg. 3050 + 6020/50/70), or where a node may be used in a “storage only” mode (not running any VMs), there will likely be a requirement to move data.
Below we show an example of the mixed cluster after disk balancing has been run, now in a “balanced” state. In some scenarios customers might run some nodes in a “storage only” state where only the CVM will run on the node, whose primary purpose is bulk storage capacity.  In this case the full node’s memory can be added to the CVM to provide a much larger read cache.
Below we show an example of how a storage only node would look in a mixed cluster with disk balancing moving data to it from the active VM nodes.

Software-Defined Controller Architecture

As mentioned above (likely numerous times), the Nutanix platform is a software based solution which ships as a bundled software + hardware appliance.  The controller VM is where the vast majority of the Nutanix software and logic sits and was designed from the beginning to be an extensible and pluggable architecture.
A key benefit to being software defined and not relying upon any hardware offloads or constructs is around extensibility.  Like with any product life cycle there will always be advancements and new features which are introduced.  By not relying on any custom ASIC/FPGA or hardware capabilities, Nutanix can develop and deploy these new features through a simple software update.  This means that the deployment of a new feature (say deduplication) can be deployed by upgrading the current version of the Nutanix software.  This also allows newer generation features to be deployed on legacy hardware models.
For example, say you’re running a workload on an older version of Nutanix software on a prior generation hardware platform (eg. 2400).  The running software version doesn’t provide deduplication capabilities which your workload could benefit greatly from.  To get these features you perform a rolling upgrade of the Nutanix software version while the workload is running, and voila, you now have deduplication.  It’s really that easy.
Similar to features, the ability to create new “adapters” or interfaces into NDFS is another key capability.  When the product first shipped it solely supported iSCSI for I/O from the hypervisor, this has now grown to include NFS and SMB.  In the future there is the ability to create new adapters for various workloads and hypervisors (HDFS, etc.).  And again, all deployed via a software update.
This is contrary to most legacy infrastructures, where a hardware upgrade or software purchase was normally required to get the “latest and greatest” features.  With Nutanix it’s different: since all features are deployed in software they can run on any hardware platform, any hypervisor and be deployed through simple software upgrades.
Below we show a logical representation of what this software-defined controller framework looks like.

Storage Tiering and Prioritization

The Disk Balancing section above talked about how storage capacity was pooled among all nodes in a Nutanix cluster and that ILM would be used to keep hot data local.  A similar concept applies to disk tiering in which the cluster’s SSD and HDD tiers are cluster wide and NDFS ILM is responsible for triggering data movement events.
A local node’s SSD tier is always the highest priority tier for all I/O generated by VMs running on that node, however all of the cluster’s SSD resources are made available to all nodes within the cluster.  The SSD tier will always offer the highest performance and is a very important thing to manage for hybrid arrays.
The tier prioritization can be classified at a high level by the following. Specific types of resources (eg. SSD, HDD, etc.) are pooled together and form a cluster wide storage tier.  This means that any node within the cluster can leverage the full tier capacity, regardless of whether it is local or not.
Below we show a high level example of how this pooled tiering looks. A common question is what happens when a local node’s SSD becomes full?  As mentioned in the Disk Balancing section a key concept is trying to keep uniform utilization of devices within disk tiers.  In the case where a local node’s SSD utilization is high, disk balancing will kick in to move the coldest data on the local SSDs to the other SSDs throughout the cluster.  This will free up space on the local SSD to allow the local node to write to SSD locally instead of going over the network.  A key point to mention is that all CVMs and SSDs are used for this remote I/O to eliminate any potential bottlenecks and remediate some of the hit of performing I/O over the network. The other case is when the overall tier utilization breaches a specific threshold [curator_tier_usage_ilm_threshold_percent (Default=75)], where NDFS ILM will kick in and, as part of a Curator job, will down-migrate data from the SSD tier to the HDD tier.  This will bring utilization within the threshold mentioned above or free up space by the following amount [curator_tier_free_up_percent_by_ilm (Default=15)], whichever is greater.
The data for down-migration is chosen using last access time. In the case where the SSD tier utilization is 95%, 20% of the data in the SSD tier will be moved to the HDD tier (95% -> 75%).  However, if the utilization was 80% only 15% of the data would be moved to the HDD tier using the minimum tier free up amount. NDFS ILM will constantly monitor the I/O patterns and (down/up)-migrate data as necessary, as well as bring the hottest data local regardless of tier.

Storage Layers and Monitoring

The Nutanix platform monitors storage at multiple layers throughout the stack ranging from the VM/Guest OS all the way down to the physical disk devices.  Knowing the various tiers and how these relate is important whenever monitoring the solution and allows you to get full visibility of how the ops relate.
Below we show the various layers at which operations are monitored and their relative granularity; each is explained below:
NDFS_MetricsTiers3
Virtual Machine Layer
  • Key Role: Metrics reported by the Guest OS
  • Description: Virtual Machine or Guest OS level metrics are pulled directly from the hypervisor and represent the performance the Guest OS is seeing, which is indicative of the I/O performance the application is experiencing.
  • When to use: When troubleshooting or looking for OS or application level detail
Hypervisor Layer
  • Key Role: Metrics reported by the Hypervisor(s)
  • Description: Hypervisor level metrics are pulled directly from the hypervisor and represent the most accurate metrics the hypervisor(s) are seeing.  This data can be viewed for one or more hypervisor node(s) or the aggregate cluster.  This layer will provide the most accurate data in terms of what performance the platform is seeing and should be leveraged in most cases.  In certain scenarios the hypervisor may combine or split operations coming from VMs, which can show up as a difference between the metrics reported by the VM and the hypervisor.  These numbers will also include cache hits served by the Nutanix CVMs.
  • When to use: Most common cases as this will provide the most detailed and valuable metrics
Controller Layer
  • Key Role: Metrics reported by the Nutanix Controller(s)
  • Description: Controller level metrics are pulled directly from the Nutanix Controller VMs (eg. Stargate 2009 page) and represent what the Nutanix front-end is seeing from NFS/SMB/iSCSI or any back-end operations (eg. ILM, disk balancing, etc.).  This data can be viewed for one or more Controller VM(s) or the aggregate cluster.  The metrics seen by the Controller Layer should match those seen by the hypervisor layer; however, they will also include any backend operations (eg. ILM, disk balancing).  These numbers will also include cache hits served by memory.
  • When to use: Similar to the hypervisor layer, can be used to show how much backend operation is taking place
Disk Layer
  • Key Role: Metrics reported by the Disk Device(s)
  • Description: Disk level metrics are pulled directly from the physical disk devices (via the CVM) and represent what the back-end is seeing.  This includes data hitting the OpLog or Extent Store where an I/O is performed on the disk.  This data can be viewed for one or more disks, the disks for a particular node, or the aggregate disks in the cluster.  In common cases it is expected that the disk ops should match the number of incoming writes as well as reads not served from the memory portion of the cache.  Any reads being served by the memory portion of the cache will not be counted here as the op is not hitting the disk device.
  • When to use: When looking to see how many ops are served from cache or hitting the disks

APIs and Interfaces

Core to any dynamic or “Software Defined” environment, Nutanix provides a vast array of interfaces allowing for simple programmability and interfacing. Here are the main interfaces:
  • REST API
  • NCLI
  • Scripting interfaces – more coming here soon :)
Core to this is the REST API, which exposes every capability and data point of the Prism UI and allows orchestration or automation tools to easily drive Nutanix actions.  This enables tools like VMware’s vCAC or Microsoft’s System Center Orchestrator to easily create custom workflows for Nutanix.  It also means that any 3rd-party developer could create their own custom UI and pull in Nutanix data via REST.
Below we show a small snippet of the Nutanix REST API explorer which allows developers to see the API and format:
RestAPI
Operations can be expanded to display details and examples of the REST call:
RestAPI2
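As a minimal sketch of calling the API from a shell – the port and base path below are what the REST API explorer on the cluster typically exposes, but treat the exact resource path, hostname, and credentials as assumptions to verify against your own explorer:
# Hypothetical example: pull cluster details via the v1 REST API
# -k accepts the self-signed certificate; replace host and credentials with your own
curl -k -u admin:'<password>' \
  "https://prism-cluster:9440/PrismGateway/services/rest/v1/cluster"
The same pattern works for any resource listed in the explorer; orchestration tools simply wrap these calls.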

Availability Domains

Availability Domains (aka node/block/rack awareness) are a key construct for distributed systems to abide by when determining component and data placement.  NDFS is currently node and block aware; this will extend to rack awareness as cluster sizes grow.  Nutanix refers to a “block” as the chassis which contains either one, two or four server “nodes”.
NOTE: A minimum of 3 blocks must be utilized for block awareness to be activated; otherwise node awareness will be defaulted to.  It is recommended to utilize uniformly populated blocks to ensure block awareness is enabled.  Common scenarios and the awareness level utilized can be found at the bottom of this section.  The 3-block requirement is to ensure quorum.
For example, a 3450 would be a block which holds 4 nodes.  The reason for distributing roles or data across blocks is to ensure that if a block fails or needs maintenance, the system can continue to run without interruption.  NOTE: Within a block, the redundant PSUs and fans are the only shared components.
Awareness can be broken into a few key focus areas:
  • Data (The VM data)
  • Metadata (Cassandra)
  • Configuration Data (Zookeeper)
Data
With NDFS data replicas will be written to other blocks in the cluster to ensure that in the case of a block failure or planned downtime, the data remains available.  This is true for both RF2 and RF3 scenarios as well as in the case of a block failure.
An easy comparison would be “node awareness” where a replica would need to be replicated to another node which will provide protection in the case of a node failure.  Block awareness further enhances this by providing data availability assurances in the case of block outages.
Below we show how the replica placement would work in a 3 block deployment:
NDFS_BlockAwareness_DataNorm
In the case of a block failure, block awareness will be maintained and the replicas will be re-replicated to other blocks within the cluster:
NDFS_BlockAwareness_DataFail2
Metadata
As mentioned in the Scalable Metadata section above, Nutanix leverages a heavily modified Cassandra platform to store metadata and other essential information.  Cassandra leverages a ring-like structure and replicates to n number of peers within the ring to ensure data consistency and availability.
Below we show an example of the Cassandra ring for a 12 node cluster:
NDFS_CassandraRing_12Node3
Cassandra peer replication iterates through nodes in a clockwise manner throughout the ring.  With block awareness the peers are distributed among the blocks to ensure no two peers are on the same block.
Below we show an example node layout translating the ring above into the block based layout:
NDFS_CassandraRing_BlockLayout_Write2
With this block aware nature, in the event of a block failure there will still be at least two copies of the data (with Metadata RF3 – In larger clusters RF5 can be leveraged).
Below we show an example of all of the nodes’ replication topology forming the ring (yes – it’s a little busy):
NDFS_CassandraRing_BlockLayout_Full
Configuration Data
Nutanix leverages Zookeeper to store essential configuration data for the cluster.  This role is also distributed in a block aware manner to ensure availability in the case of a block failure.
Below we show an example layout showing 3 Zookeeper nodes distributed in a block aware manner:
NDFS_Zookeeper_BlockLayout
In the event of a block outage, meaning one of the Zookeeper nodes will be unavailable, the Zookeeper role would be transferred to another node in the cluster, as shown below:
NDFS_Zookeeper_BlockLayout_Fail

Below we break down some common scenarios and what level of awareness will be utilized:
  • < 3 blocks –> NODE awareness
  • 3+ blocks uniformly populated –> BLOCK + NODE awareness
  • 3+ blocks not uniformly populated
    • If SSD tier variance between blocks is > max variance –> NODE awareness
      • Example: 2 x 3450 + 1 x 3150
    • If SSD tier variance between blocks is < max variance  –> BLOCK + NODE awareness
      • Example: 2 x 3450 + 1 x 3350
    • NOTE: max tier variance is calculated as: 100 / (RF+1) – see the sketch after this list
      • Eg. 33% for RF2 or 25% for RF3
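To make the variance rule above concrete, here is a small bash sketch.  The max variance formula comes straight from the note above; how Nutanix actually measures “SSD tier variance between blocks” is not spelled out here, so the measured value is treated as an input you supply:
# Sketch only: apply the awareness rule above for a given RF and measured variance
rf=2                                  # replication factor
measured_variance=20                  # SSD tier variance between blocks (%), assumed input
max_variance=$(( 100 / (rf + 1) ))    # 33 for RF2, 25 for RF3

if (( measured_variance < max_variance )); then
  echo "BLOCK + NODE awareness"
else
  echo "NODE awareness"
fi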

Administration

Important Pages

These are advanced Nutanix pages, beyond the standard user interface, that allow you to monitor detailed stats and metrics.  The URLs are formatted in the following way: http://<Nutanix CVM IP/DNS>:<Port/path (mentioned below)>
Example: http://MyCVM-A:2009
NOTE: if you’re on a different subnet, IPtables will need to be disabled on the CVM to access the pages.  An example of pulling these pages from a shell follows the list below.
# 2009 Page
  • This is a Stargate page used to monitor the back end storage system and should only be used by advanced users.  I’ll have a post that explains the 2009 pages and things to look for.
# 2009/latency Page
  • This is a Stargate page used to monitor the back end latency
# 2009/h/traces Page
  • This is the Stargate page used to monitor activity traces for operations
# 2010 Page
  • This is the Curator page which is used for monitoring curator runs
# 2010/master/control Page
  • This is the Curator control page which is used to manually start Curator jobs
# 2011 Page
  • This is the Chronos page which monitors jobs and tasks scheduled by curator
# 2020 Page
  •  This is the Cerebro page which monitors the protection domains, replication status and DR
# 7777 Page
  • This is the Aegis Portal page which can be used to get good logs and statistics, useful commands, and modify Gflags
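Because these are plain HTTP pages served by each CVM, they can also be fetched from a shell for quick checks.  The CVM address below is a placeholder:
# Fetch the Stargate 2009 page and a couple of its sub-pages from a CVM (hypothetical IP)
curl -s http://10.1.1.50:2009
curl -s http://10.1.1.50:2009/latency
curl -s http://10.1.1.50:2009/h/traces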

Cluster Commands

# Check cluster status
 # Check local CVM service status
 # Nutanix cluster upgrade
 # Restart cluster service from CLI
 # Start cluster service from CLI
 # Restart local service from CLI
 # Start local service from CLI
 # Cluster add node from cmdline
 # Find number of vDisks
 # Find cluster id
 # Disable IPtables
# Check for Shadow Clones
 # Reset Latency Page Stats
 # Find Number of vDisks
 # Start Curator scan from CLI
 # Compact ring
 # Find NOS version
 # Find CVM version
 # Manually fingerprint vDisk(s)
 # Echo Factory_Config.json for all cluster nodes
  # Upgrade a single Nutanix node’s NOS version
 # Install Nutanix Cluster Check (NCC)
 # Run Nutanix Cluster Check (NCC)
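The command bodies themselves did not survive the formatting above.  As a partial, hedged sketch, these are a few commonly used CVM commands that correspond to the first several entries; verify them against your NOS version’s documentation before relying on them:
# Check cluster status (run from any CVM)
cluster status

# Check local CVM service status
genesis status

# Start cluster services from the CLI
cluster start

# Restart local services on this CVM
genesis restart

# Find cluster id (assumption: reported in the cluster details output)
ncli cluster info

# Disable IPtables on the CVM (assumption: temporary, service-based form)
sudo service iptables stop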

NCLI

NOTE: All of these actions can be performed via the HTML5 GUI.  I just use these commands as part of my bash scripting to automate tasks.
# Add subnet to NFS whitelist
# Display Nutanix Version
 # Display hidden NCLI options
# List Storage Pools
 # List containers
 # Create container
 # List VMs
 # List public keys
 # Add public key
 # Remove public key
# Create protection domain
 # Create remote site
 # Create protection domain for all VMs in container
 # Create protection domain with specified VMs
# Create protection domain for NDFS files (aka vDisk)
 # Create snapshot of protection domain
 # Create snapshot and replication schedule to remote site
 # List replication status
# Migrate protection domain to remote site
 # Activate protection domain
 # Enable NDFS Shadow Clones
# Enable Dedup for vDisk
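As with the cluster commands, the NCLI bodies were lost in formatting.  A hedged sample of the simpler entries is below; entity and action names vary slightly between NOS releases, so check ncli help for the exact syntax on your cluster:
# Display Nutanix version
ncli cluster version

# List storage pools, containers and VMs
ncli sp ls
ncli ctr ls
ncli vm ls

# Add a subnet to the NFS whitelist (hypothetical subnet; flag name as used in older NOS releases)
ncli cluster add-to-nfs-whitelist ip-subnet-masks=10.2.0.0/255.255.0.0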

Metrics & Thresholds

The below will cover specific metrics and thresholds on the Nutanix back end.  More updates to these coming shortly!
2009 Stargate – Overview
2009-main
Metric | Explanation | Threshold/Target
Start time | The start time of the Stargate service
Build version | The build version currently running
Build last commit date | The last commit date of the build
Stargate handle | The Stargate handle
iSCSI handle | The iSCSI handle
SVM id | The SVM id of Stargate
Incarnation id
Highest allocated opid
Highest contiguous completed opid
Extent cache hits | The % of read requests served directly from the in-memory extent cache
Extent cache usage | The MB size of the extent cache
Content cache hits | The % of read requests served directly from the content cache
Content cache flash pagein pct
Content cache memory usage | The MB size of the in-memory content cache
Content cache flash usage | The MB size of the SSD content cache
QoS Queue (size/admitted) | The admission control queue size and number of admitted ops
Oplog QoS queue (size/admitted) | The oplog queue size and number of admitted ops
NFS Flush Queue (size/admitted)
NFS cache usage
 2009 Stargate – Cluster State
2009-cluster_state
Metric | Explanation | Threshold/Target
SVM Id | The Id of the Controller
IP:port | The IP:port of the Stargate handle
Incarnation
SSD-PCIe | The SSD-PCIe devices and size/utilization
SSD-SATA | The SSD-SATA devices and size/utilization
DAS-SATA | The HDD-SATA devices and size/utilization

Container Id | The Id of the container
Container Name | The Name of the container
Max capacity (GB) – Storage pool | The Max capacity of the storage pool
Max capacity (GB) – Container | The Max capacity of the container (will normally match the storage pool size)
Reservation (GB) – Total across vdisks | The reservation in GB across vdisks
Reservation (GB) – Admin provisioned
Container usage (GB) – Total | The total usage in GB per container
Container usage (GB) – Reserved | The reservation used in GB per container
Container usage (GB) – Garbage
Unreserved available (GB) – Container | The available capacity in GB per container
Unreserved available (GB) – Storage pool | The available capacity in GB for the storage pool
2009 Stargate – NFS Slave
2009-NFSSlave
Metric | Explanation | Threshold/Target
Vdisk Name | The name of the Vdisk on NDFS
Unstable data – KB
Unstable data – Ops/s
Unstable data – KB/s
Outstanding Ops – Read | The number of outstanding read ops for the Vdisk
Outstanding Ops – Write | The number of outstanding write ops for the Vdisk
Ops/s – Read | The number of current read operations per second for the Vdisk
Ops/s – Write | The number of current write operations per second for the Vdisk
Ops/s – Error | The number of current error (failed) operations per second for the Vdisk
KB/s – Read | The read throughput in KB/s for the Vdisk
KB/s – Write | The write throughput in KB/s for the Vdisk
Avg latency (usec) – Read | The average read op latency in micro seconds for the Vdisk
Avg latency (usec) – Write | The average write op latency in micro seconds for the Vdisk
Avg op size | The average op size in bytes for the Vdisk
Avg outstanding | The average outstanding ops for the Vdisk
% busy | The % busy of the Vdisk

Container Name | The name of the container
Outstanding Ops – Read | The number of outstanding read ops for the container
Outstanding Ops – Write | The number of outstanding write ops for the container
Outstanding Ops – NS lookup | The number of outstanding NFS lookup ops for the container
Outstanding Ops – NS update | The number of outstanding NFS update ops for the container
Ops/s – Read | The number of current read operations per second for the container
Ops/s – Write | The number of current write operations per second for the container
Ops/s – NS lookup | The number of current NFS lookup ops for the container
Ops/s – NS update | The number of current NFS update ops for the container
Ops/s – Error | The number of current error (failed) operations per second for the container
KB/s – Read | The read throughput in KB/s for the container
KB/s – Write | The write throughput in KB/s for the container
Avg latency (usec) – Read | The average read op latency in micro seconds for the container
Avg latency (usec) – Write | The average write op latency in micro seconds for the container
Avg latency (usec) – NS lookup | The average NFS lookup latency in micro seconds for the container
Avg latency (usec) – NS update | The average NFS update latency in micro seconds for the container
Avg op size | The average op size in bytes for the container
Avg outstanding | The average outstanding ops for the container
% busy | The % busy of the container
2009 Stargate – Hosted VDisks
2009-hosted_vdisk
Metric | Explanation | Threshold/Target
Vdisk Id | The Id of the Vdisk on NDFS
Vdisk Name | The name of the Vdisk on NDFS
Usage (GB) | The usage in GB per Vdisk
Dedup (GB)
Oplog – KB | The size of the Oplog for the Vdisk
Oplog – Fragments | The number of fragments of the Oplog for the Vdisk
Oplog – Ops/s | The number of current operations per second for the Vdisk
Oplog – KB/s | The throughput in KB/s for the Vdisk
Outstanding Ops – Read | The number of outstanding read ops for the Vdisk
Outstanding Ops – Write | The number of outstanding write ops for the Vdisk
Outstanding Ops – Estore | The number of outstanding ops to the extent store for the Vdisk
Ops/s – Read | The number of current read operations per second for the Vdisk
Ops/s – Write | The number of current write operations per second for the Vdisk
Ops/s – Error | The number of current error (failed) operations per second for the Vdisk
Ops/s – Random
KB/s – Read | The read throughput in KB/s for the Vdisk
KB/s – Write | The write throughput in KB/s for the Vdisk
Avg latency (usec) | The average op latency in micro seconds for the Vdisk
Avg op size | The average op size in bytes for the Vdisk
Avg qlen | The average queue length for the Vdisk
% busy
2009 Stargate – Extent Store
2009-extent_store
Metric | Explanation | Threshold/Target
Disk Id | The disk id of the physical device
Mount point | The mount point of the physical device
Outstanding Ops – QoS Queue | The number of (primary/secondary) ops for the device
Outstanding Ops – Read | The number of outstanding read ops for the device
Outstanding Ops – Write | The number of outstanding write ops for the device
Outstanding Ops – Replicate
Outstanding Ops – Read Replica
Ops/s – Read | The number of current read operations per second for the device
Ops/s – Write | The number of current write operations per second for the device
Ops/s – Error | The number of current error (failed) operations per second for the device
Ops/s – Random
KB/s – Read | The read throughput in KB/s for the device
KB/s – Write | The write throughput in KB/s for the device
Avg latency (usec) | The average op latency in micro seconds for the device
Avg op size | The average op size in bytes for the device
Avg qlen | The average queue length for the device
Avg qdelay | The average queue delay for the device
% busy
Size (GB)
Total usage (GB) | The total usage in GB for the device
Unshared usage (GB)
Dedup usage (GB)
Garbage (GB)
Egroups | The number of extent groups for the device
Corrupt Egroups | The number of corrupt (bad) extent groups for the device

Gflags

Coming soon :)

Troubleshooting

# Find cluster error logs
 # Find cluster fatal logs
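A hedged example of what these lookups typically boil down to on a CVM, assuming the standard /home/nutanix/data/logs layout and glog-style .ERROR/.FATAL files (paths may differ by NOS version):
# List the most recent ERROR and FATAL logs on the local CVM (assumed log directory)
ls -ltr /home/nutanix/data/logs/*.ERROR
ls -ltr /home/nutanix/data/logs/*.FATAL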

Book of vSphere

Architecture

To be input

How It Works

Array Offloads – VAAI

The Nutanix platform supports the VMware APIs for Array Integration (VAAI), which allows the hypervisor to offload certain tasks to the array.  This is much more efficient as the hypervisor doesn’t need to be the “man in the middle”. Nutanix currently supports the VAAI primitives for NAS, including the ‘full file clone’, ‘fast file clone’ and ‘reserve space’ primitives.  Here’s a good article explaining the various primitives: LINK.  For both the full and fast file clones an NDFS “fast clone” is done, meaning a writable snapshot (using redirect-on-write) is created for each clone.  Each of these clones has its own block map, meaning that chain depth isn’t anything to worry about. The following will determine whether or not VAAI will be used for specific scenarios:
  • Clone VM with Snapshot –> VAAI will NOT be used
  • Clone VM without Snapshot which is Powered Off –> VAAI WILL be used
  • Clone VM to a different Datastore/Container –> VAAI will NOT be used
  • Clone VM which is Powered On  –> VAAI will NOT be used
These scenarios apply to VMware View:
  • View Full Clone (Template with Snapshot) –> VAAI will NOT be used
  • View Full Clone (Template w/o Snapshot) –> VAAI WILL be used
  • View Linked Clone (VCAI) –> VAAI WILL be used
You can validate VAAI operations are taking place by using the ‘NFS Adapter’ Activity Traces page.

Administration

To be input

Important Pages

To be input

Command Reference

# ESXi cluster upgrade
Performing a rolling reboot of ESXi hosts: For PowerCLI on automated host reboots, SEE HERE
# Restart ESXi host services
 # Display ESXi host nics in ‘Up’ state
 # Display ESXi host 10GbE nics and status
 # Display ESXi host active adapters
 # Display ESXi host routing tables
# Check if VAAI is enabled on datastore
 # Set VIB acceptance level to community supported
 # Install VIB
 # Check ESXi ramdisk space
 # Clear pynfs logs
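The ESXi commands themselves were also lost in formatting.  Standard ESXi-shell equivalents for several of the entries above are shown below as a hedged sketch (the VIB path is a placeholder; verify options per ESXi release):
# Restart ESXi host services
services.sh restart

# Display host nics and their link state
esxcli network nic list

# Display host routing tables
esxcfg-route -l

# Set VIB acceptance level to community supported, then install a VIB (hypothetical path)
esxcli software acceptance set --level=CommunitySupported
esxcli software vib install -v /tmp/example.vib

# Check ESXi ramdisk space
vdf -h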

Metrics & Thresholds

To be input

Troubleshooting

To be input

Book of Hyper-V

Architecture

To be input

How It Works

Array Offloads – ODX

The Nutanix platform supports Microsoft Offloaded Data Transfers (ODX), which allows the hypervisor to offload certain tasks to the array.  This is much more efficient as the hypervisor doesn’t need to be the “man in the middle”. Nutanix currently supports the ODX primitives for SMB, which include full copy and zeroing operations.  However, contrary to VAAI, which has a “fast file” clone operation (using writable snapshots), the ODX primitives do not have an equivalent and perform a full copy.  Given this, it is more efficient to rely on the native NDFS clones, which can currently be invoked via nCLI, REST, or PowerShell cmdlets.
Currently ODX IS invoked for the following operations:
  • In VM or VM to VM file copy on NDFS SMB share
  • SMB share file copy
  • Deploy template from SCVMM Library (NDFS SMB share) - NOTE: Shares must be added to the SCVMM cluster using short names (eg. not FQDN).  An easy way to force this is to add an entry into the hosts file for the cluster (eg. 10.10.10.10     nutanix-130).
ODX is NOT invoked for the following operations:
  • Clone VM through SCVMM
  • Deploy template from SCVMM Library (non-NDFS SMB Share)
  • XenDesktop Clone Deployment
You can validate ODX operations are taking place by using the ‘NFS Adapter’ Activity Traces page (yes, I said NFS, even though this is being performed via SMB).  The operation activity shown will be ‘NfsSlaveVaaiCopyDataOp‘ when copying a vDisk and ‘NfsSlaveVaaiWriteZerosOp‘ when zeroing out a disk.
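Since the activity traces are exposed on the Stargate 2009 page described earlier, a quick shell check for the ODX copy op could look like the following (hypothetical CVM address; the traces page is HTML, so this is only a rough count):
# Count occurrences of the ODX copy operation in the Stargate activity traces page
curl -s http://10.1.1.50:2009/h/traces | grep -c NfsSlaveVaaiCopyDataOp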

Administration

To be input

Important Pages

To be input

Command Reference

# Execute command on multiple remote hosts
 # Check available VMQ Offloads
# Disable VMQ for VMs matching a specific prefix
 # Enable VMQ for VMs matching a certain prefix
 # Power-On VMs matching a certain prefix
 # Shutdown VMs matching a certain prefix
 # Stop VMs matching a certain prefix
 # Get Hyper-V host RSS settings

Metrics & Thresholds

To be input

Troubleshooting

To be input

Revisions

  1. 09-04-2013 | Initial Version
  2. 09-04-2013 | Updated with components section
  3. 09-05-2013 | Updated with I/O path overview section
  4. 09-09-2013 | Updated with converged architecture section
  5. 09-11-2013 | Updated with data structure section
  6. 09-24-2013 | Updated with data protection section
  7. 09-30-2013 | Updated with data locality section
  8. 10-01-2013 | Updated with shadow clones section
  9. 10-07-2013 | Updated with scalable metadata section
  10. 10-11-2013 | Updated with elastic dedupe engine
  11. 11-01-2013 | Updated with networking and I/O section
  12. 11-07-2013 | Updated with CVM autopathing section
  13. 01-23-2014 | Updated with new content structure and layout
  14. 02-10-2014 | Updated with storage layers and monitoring
  15. 02-18-2014 | Updated with array offloads sections
  16. 03-12-2014 | Updated with genesis
  17. 03-17-2014 | Updated spelling and grammar
  18. 03-19-2014 | Updated with apis and interfaces section
  19. 03-25-2014 | Updated script block formatting
  20. 03-26-2014 | Updated with dr & protection domain NCLI commands
  21. 04-15-2014 | Updated with failure domains section and 4.0 updates
  22. 04-23-2014 | Updated with command to echo factory_config.json
