Wednesday, March 26, 2014

Java 8

Oracle Java 1.8
Ubuntu
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
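A quick way to confirm which JDK ended up on the path, and optionally make Oracle Java 8 the system default (the oracle-java8-set-default package name is an assumption based on the same webupd8team PPA):

# check the runtime and compiler the shell now resolves to
java -version
javac -version
# optional: the same PPA is reported to ship a package that sets JAVA_HOME and the defaults
sudo apt-get install oracle-java8-set-default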

  • Oracle Java SE Embedded 8 - roughly 11 MB footprint for embedded devices
  • Java Micro Edition (J2ME is back)
    • A couple of profiles are available
    • Compact profiles 1, 2 and 3
  • Lambda expressions (see the quick check after this list)
    • Sumatra project
    • Parallel programming for multi-core
  • New Date and Time API – finally, after the faulty Date and GregorianCalendar attempts
  • Java client
    • The installer sets up JavaFX (now with 3D support) automatically, with the classpath configured
  • Removal of the PermGen memory space
    • About 200 bug fixes
    • Huge performance improvements
    • New JVM metrics/measurement APIs and management hooks
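To confirm that lambdas and the new date/time API really work with the installed JDK, a throwaway check from the shell is enough (a minimal sketch; the file and class names are arbitrary):

cat > Java8Check.java <<'EOF'
import java.time.LocalDate;

public class Java8Check {
    public static void main(String[] args) {
        // a lambda assigned to a functional interface, plus the new java.time API
        Runnable r = () -> System.out.println("Lambdas work; today is " + LocalDate.now());
        r.run();
    }
}
EOF
javac Java8Check.java && java Java8Check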


OpenJDK 1.8

JDK 8: General Availability

mark.reinhold at oracle.com
Tue Mar 18 18:55:18 UTC 2014

I'm very pleased to announce that JDK 8 is finished -- two years,
seven months, and eighteen days after the release of JDK 7.
 
My deepest thanks to everyone who contributed to this monumental
release.
 
A few more thoughts: http://mreinhold.org/blog/jdk8-ga
 
- Mark



Tuesday, March 25, 2014

Open vSwitch in Hardware

I wanted to replace the Broadcom switch software with OVS, and while analyzing the options I found some interesting discussions.




"In networking, in most cases open source locks you in similarly to non-open-source software."

References

http://www.sdncentral.com/technology/vswitch-the-new-battleground-what-every-datacenter-operator-must-know/2012/07/

http://www.xflowresearch.com/
http://www.lightreading.com/comms-chips/startup-finds-a-business-in-openflow/d/d-id/690939
http://benpfaff.org/writings/openvswitch/orr.html

http://www.sdncentral.com/market/sdn-myth-busters-we-test-5-common-sdn-myths-propagated-by-vendors/2012/10/

A response to Michael Orr

Michael Orr wrote the following comment on an article at sdncentral.com:
We wanted to use OVS on our Silicon Switch, and initially wanted to use it as both a HW-speed OpenFlow implementation and a general L2/L3 HW-based switch. We found out that OVS assumed you may want to use the HW as a wild-card match-element for OpenFlow, but is not really suited to using HW for other functionality (Bridging, routing, VLANs, LAGs, etc. etc.). To use HW for these you have to write your own OFPROTO, a major change, which will cause you to generate your own private fork, and split off from OVS main branch irrevocably.
Believing a HW-capable OVS is generally A Good Thing, We contacted Nicira, suggesting we do the work, and asking they review/approve and (most importantly) adopt the final result into the main branch. They refused. I am not even arguing here who was right, and the reasons for the decision—That's not the point. The point is that (at that time, not too long ago) we were placed in the exact dilemma described in this post—use OVS as Nicira sees fit, or abandon the main branch of OVS and fork an incompatible version.
These paragraphs spread a lot of misinformation. I'll address them in two sections below.

Infeasibility of a worthwhile hybrid provider

First, let's address what Orr says is his point:
I am not even arguing here who was right, and the reasons for the decision—That's not the point.
The reason for the decision was essentially that we did not see how Orr's request could be solved in a worthwhile way. Marvell, in turn, did not (to my knowledge) ever attempt to constructively respond to our arguments.
It helps to have some background on how Open vSwitch is ported to a new platform. You can find the whole story in the PORTING file at the top of the Open vSwitch distribution. To summarize, there were (and are) two viable options for a port, either to write an “ofproto provider” or a “dpif provider.” PORTING summarizes these choices as:
  • Only an ofproto provider can take full advantage of hardware with built-in support for wildcards (e.g. an ACL table or a TCAM).
  • A dpif provider can take advantage of the Open vSwitch built-in implementations of bonding, LACP, 802.1ag, 802.1Q VLANs, and other features. An ofproto provider has to provide its own implementations, if the hardware can support them at all.
  • A dpif provider is usually easier to implement, but most appropriate for software switching. It “explodes” wildcard rules into exact-match entries. This allows fast hash lookups in software, but makes inefficient use of TCAMs in hardware that support wildcarding.
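For a rough feel of where the dpif layer sits, note that stock Open vSwitch already ships more than one dpif implementation, and the one backing a bridge can be chosen when the bridge is created (a minimal sketch using standard ovs-vsctl commands; "netdev" selects the userspace datapath, while the default "system" type uses the kernel module):

# bridge on the default (kernel) datapath
ovs-vsctl add-br br0
# bridge backed by the userspace "netdev" dpif provider instead
ovs-vsctl add-br br1 -- set bridge br1 datapath_type=netdev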
Orr and Marvell were looking for a third “hybrid” option that combines the advantages of both approaches. We also regarded this as a desirable goal, so last summer we spent hours brainstorming ways to achieve this goal. Anyone who has worked with me knows that, if you want me to spend hours in a meeting, then it has to be for something important.
We did regard this goal as important, so we spent some serious time to talk through and critique a number of ideas. We didn't rule anything out, including drastic changes to the Open vSwitch architecture. Again, anyone who has worked with me knows that I never rule out changes simply because they are large, as long as the benefit is equally large. But we didn't come up with an effective solution, and we did come up with a few issues that seemed insurmountable, so we reported that back to Marvell.
You don't have to just take my word for any of the above, though, because I documented our reasoning and our conclusions in a section of the PORTING file titled “Why OVS Does Not Support Hybrid Providers” that I committed to the Open vSwitch Git repository on July 15, 2011. It reads as follows:
The “Porting Strategies” section above describes the “ofproto provider” and “dpif provider” porting strategies. Only an ofproto provider can take advantage of hardware TCAM support, and only a dpif provider can take advantage of the OVS built-in implementations of various features. It is therefore tempting to suggest a hybrid approach that shares the advantages of both strategies.
However, Open vSwitch does not support a hybrid approach. Doing so may be possible, with a significant amount of extra development work, but it does not yet seem worthwhile, for the reasons explained below.
First, user surprise is likely when a switch supports a feature only with a high performance penalty. For example, one user questioned why adding a particular OpenFlow action to a flow caused a 1,058x slowdown on a hardware OpenFlow implementation [1]. The action required the flow to be implemented in software.
Given that implementing a flow in software on the slow management CPU of a hardware switch causes a major slowdown, software-implemented flows would only make sense for very low-volume traffic. But many of the features built into the OVS software switch implementation would need to apply to every flow to be useful. There is no value, for example, in applying bonding or 802.1Q VLAN support only to low-volume traffic.
Besides supporting features of OpenFlow actions, a hybrid approach could also support forms of matching not supported by particular switching hardware, by sending all packets that might match a rule to software. But again this can cause an unacceptable slowdown by forcing bulk traffic through software in the hardware switch's slow management CPU. Consider, for example, a hardware switch that can match on the IPv6 Ethernet type but not on fields in IPv6 headers. An OpenFlow table that matched on the IPv6 Ethernet type would perform well, but adding a rule that matched only UDPv6 would force every IPv6 packet to software, slowing down not just UDPv6 but all IPv6 processing.
[1] Aaron Rosen, “Modify packet fields extremely slow”, openflow-discuss mailing list, June 26, 2011, archived at https://mailman.stanford.edu/pipermail/openflow-discuss/2011-June/002386.html.
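To make the matching example concrete, this is roughly how the two rules would look in ovs-ofctl flow syntax (an illustrative sketch; the bridge name and priorities are arbitrary):

# matches every IPv6 packet by Ethernet type; hardware that can match the
# ethertype handles this entirely in the fast path
ovs-ofctl add-flow br0 "priority=10,dl_type=0x86dd,actions=normal"
# matches only UDP over IPv6; if the hardware cannot match IPv6 header fields,
# every IPv6 packet must be sent to software just to decide which rule applies
ovs-ofctl add-flow br0 "priority=20,udp6,actions=drop"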
I see that an internal email I wrote about the above text says:
Please notice that this patch is on ovs-dev… That's why it doesn't, for example, name Marvell or Michael Orr or anyone else.
but since Orr is bringing up the issue in public I don't see why I shouldn't.
If Orr or Marvell ever responded to the above, for example to explain why they disagree or to propose another approach, then it never made it to me. I don't see any follow-up to it in my email archive.

Implementing an ofproto provider

Michael Orr writes:
To use HW for these you have to write your own OFPROTO, a major change, which will cause you to generate your own private fork, and split off from OVS main branch irrevocably.
The first part of this is true. Short of a feasible approach to a hybrid provider, one must write an “ofproto provider” to obtain the best performance with hardware.
The remainder does not make sense.
First, yes, an ofproto provider could be a significant amount of code, but that does not make it a major change. It's simply adding one or more source files that implement hardware-specific functionality in a hardware-specific way. The Open vSwitch source code is intentionally designed to make plugging in such a provider straightforward. The header file that describes the interface has almost 3 times as many comment lines as other lines, to make the interface as clear as possible.
It also does not make any sense in this context to talk about writing an ofproto provider as forcing a private fork of Open vSwitch. Regardless of the means that Marvell chooses to port Open vSwitch to its hardware, it would be creating a fork of Open vSwitch, because Marvell regards as proprietary the specifications and the APIs for the high-end switching chips for which Open vSwitch is relevant. No one who has not signed a non-disclosure agreement with Marvell would ever see the code.
Finally, claiming that to write an ofproto provider, even one that due to Marvell's business practices would necessarily be private, is to “split off from OVS main branch irrevocably” does not make sense. The interface between an ofproto provider and the rest of Open vSwitch, though it is not frozen, has evolved rather than seen drastic changes over Open vSwitch releases. An author of a private ofproto provider should be able to track upstream Open vSwitch changes, not with negligible effort but with a reasonable amount.
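For anyone who wants to judge the size of that effort for themselves, the porting guide and both provider interfaces live in the source tree (paths as of the OVS releases of that era; the GitHub mirror URL is an assumption):

# fetch the Open vSwitch sources and look at the pieces discussed above
git clone https://github.com/openvswitch/ovs.git
less ovs/PORTING                      # porting strategies, including the hybrid-provider section
less ovs/ofproto/ofproto-provider.h   # the ofproto provider interface
less ovs/lib/dpif-provider.h          # the dpif provider interface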

http://www.xflowresearch.com/expertise.html

Technical Expertise



xFlow specializes in full SDN stack development:
  • Hypervisor technology like Xen and KVM
  • Network virtualization using emerging tunneling protocols such as VXLAN, STT, NVGRE, GTP, etc.
  • Highly optimized fast path implementations in soft switches
SDN data planes on proprietary platforms:
  • Marvell switch chips: xCAT (24/48x1Gbps) and Lion (24/48x10Gbps) platforms
  • Broadcom switch chips: Ported OVS to Broadcom 563xx and Trident switch chips
  • Octeon-based switching platforms: Ported OVS with significant enhancements to Octeon NIC platforms
  • Optical/photonic switches: Designed and implemented an OpenFlow API extension for Calient Networks’ optical switches
SDN Controller Architectures:
  • Contributors to NOX/POX development
  • Ported NOX to Cavium Octeon processors
SDN Controller Application Development:
  • Expertise in NOX, Beacon, and Floodlight application development
Benchmarking Optimizations such as:
  • Optimized Queuing
  • Advanced Traffic Policing
  • Support for Multiple Lookups
  • Advanced QoS Support
  • Linux kernel optimization using large packet buffers
  • Benchmarking/Profiling


Monday, March 24, 2014

Network Orchestration

I recently came across some solutions for orchestrating network function management and controllers.
Some tools for virtualized SDDC (software-defined data center) management:

http://opencontrail.org/

ZenOSS

Cisco NAM (Network Analysis Module)

Thursday, March 20, 2014

DPI as Network Function Virtualization

Network applications: much of what networks do today can be delivered as an application (NFV)
  • Policy enforcement
  • Security
  • Analytics
  • DPI across L2 to L7
    • Flow-based routing?
  • Unified DPI
    • How applications such as policy control and SLA enforcement communicate with the controller is not yet standardized and remains non-uniform
    • OpenFlow must address this in the future to standardize it
  • SDN layers for DPI
    • Application (advanced DPI layer)
    • Control
    • Infrastructure (simpler DPI so as not to load the CPU)

http://www.qosmos.com/products/virtualized-dpi-vnfc/

Qosmos DPI as a Virtual Network Function Component (VNFC) complies with an official use case standardized by ETSI in July 2013. This new Qosmos product runs in a virtual machine and uses optimized interface to feed application information and metadata to other integrated components, together forming virtual networking equipment (VNFs) such as Service Routers, GGSN, PCEF, BRAS, ADC/Load Balancers, Network Analytics, NG Firewalls, WAN optimization, etc.
Qosmos DPI VNFC is based on Qosmos’ flagship product ixEngine®, which is already established as the de facto industry-standard DPI engine for developers of telecoms and enterprise solutions. ixEngine identifies and extracts information traveling over networks in real time, providing a true picture of the traffic by identifying protocols, types of application, and extracting additional information in the form of metadata. Equipment makers, telco and enterprise software vendors, and cloud service providers use Qosmos to gain application awareness, accelerate time to market and benefit from continuous signature updates.
In a telecom environment, Qosmos DPI VNFC can optimize a virtual Evolved Packet Core, by feeding real-time traffic information to a virtual PDN-Gateway that can trigger multiple functions such as quality of service (QoS) management or traffic shaping.  This means that operators can optimize costs through improved equipment utilization while securing service level agreements (SLAs) through dynamic and rapid instantiations.


 Benefits


  • Short time-to-market for solution developers
  • Safe implementation: VNFCs de-correlated from DPI software and signature updates
  • Improved flexibility (VM migration)
  • Elastic scalability (up & out)

http://www.qosmos.com/wp-content/uploads/2012/06/schema_xengine-hd.jpg



Who moved my cheese

Whitebox Networking

A whitebox is simply bare-metal hardware with enough capacity to run any software on top of it for various applications.

As virtualization has grown to the extreme of private, public and hybrid clouds, compute, storage and network have each been turned into a piece of software running on a hypervisor.

These hypervisor flavors can sit on top of such bare-metal whiteboxes to provide specific function(s).

Replacing network switches, routers and other purpose-built network components with this whitebox hardware running network software is "whitebox networking".

Networking has evolved by building specific hardware for
  • speed of data transmission, 
  • power of packet processing, 
  • expanding need for network bandwidth on shrinking hardware space
  • stack of software with bundles of network protocol functions
After 2010 the approach changed: shrinking hardware, cheaper and faster memory, and smaller function-specific processors became cheap and readily available (instead of having to build custom ASICs, FPGAs and network processors - NPs).

Because both hardware and software have changed, a point of convergence arises: a technology is needed to control this whitebox (bare-metal) hardware with software, and that controller is the SDN (Software Defined Network) controller - see opendaylight.org.
Hardware Solutions
http://cumulusnetworks.com/product/overview/



Software Solutions

Interview - Netronome (VMBlog)

Q&A: Interview with Netronome, Talking Growth of SDN and NFV in 2014
The network is becoming sexy again thanks in large part to the growth of the modern data center, virtualization and cloud environments.  And with these dynamic environments comes the need for a dynamic networking system, something that a network virtualization solution can help provide.  So to find out more about SDN and NFV, I reached out to Netronome and spoke with Jarrod Siket, the company's SVP and GM of marketing. 
VMblog:  Give us some background on why Netronome is shifting its focus to SDN and NFV architectures.
Jarrod Siket:  Netronome was founded on the notion of open computing and open networking. Our first reference designs in 2007 were launched as OpenAppliances. Our vision has long-been for open architectures, primarily based on x86 processing, hosting standard Linux-based applications, accelerated by Netronome flow processors. We bring high performance processors, often used in NIC configurations, and software that accelerates these platforms.
Today many trends are now aligning with our early vision. Networks and devices are transitioning from proprietary hardware and closed software to commodity hardware and open software. Cloud and mobile computing are driving the need for a new network with seamless service migration, simplified management, tenant isolation and massive scale out in the data center. Many organizations struggle to run their wide area networks efficiently and SDN eases the work of running large networks by moving complex and proprietary ASICs and APIs to standard interfaces that can manipulate how and where traffic goes in a data center.
At Netronome, we want to bring unmatched performance and programmability to virtualized servers, networks and services. We are focusing on SDN and NFV because the use of open software on standard, high-volume servers and switches can reduce costs, simplify deployments and management, and enable horizontal scale-out - while accelerating innovation.
VMblog:  Please explain the products and set of features that the SDN launch delivers.
Siket:  With the new product launch, we are now offering a suite of FlowNIC PCIe Gen3 cards that scale up to 200 Gbps, in addition to a new FlowEnvironment software package that delivers standards-based APIs and configuration protocols for virtual switch offload and acceleration. Our new FlowNIC family combines the industry's highest performance and port density into a PCIe Gen3 adapter. The cards feature up to four PCIe Gen3 interfaces, as well as, 216 programmable cores to align with the rapid change in SDN standards and protocols. We are thrilled to deliver software that includes standards-compliant support for Open vSwitch (OVS) 2.0, OpenFlow 1.4, Intel DPDK, and network virtualization protocols such as NV-GRE and VXLAN - and that is supported on both existing NFE-32xx acceleration cards and the new FlowNIC-6xxx cards.
VMblog:  How will the new data plane hardware and software improve performance of virtual switches and networking functions?
Siket:  In order to virtualize the network, you need to make every edge device an intelligent touch point while supporting more advanced networking tasks. Our new FlowEnvironment software delivers more than a 20X increase to virtual switching performance, drastically increasing the number of available virtual machine instances per server. The FlowNICs' 216 programmable cores allow them to keep pace with the rapid changes in SDN protocols and standards. When these features are combined, they solve many of the scalability problems seen in virtual switch implementations where high throughput and I/O frequencies are required, while still maintaining the evolution of a software-based edge. This broad applicability to any virtualized server allows the new hardware and software to increase utilization and operating efficiencies.
VMblog:  And how do Netronome's flow processing solutions help customers overcome the limitations of other common switching solutions?
Siket:  Standard servers struggle with inefficient CPUs, NICs that can't offload and high overheads. Our flow processing solutions help customers overcome the limitations of common switching solutions by allowing multiple header fields to process, while improving port density and bandwidth, and maintaining software control and orchestration. This is done by replacing a complex infrastructure with a simplified packet core, surrounded by a software-based, intelligent edge. The intelligent edge directs traffic among virtual machines across a sophisticated overlay network, and still provides network and security services. Netronome's solutions prevent the complex and fluctuating workloads from interfering with application and network performance.
VMblog:  A recent announcement included a number of testimonials from a wide-range of customers. What is a common theme that customers addressed in regards to Netronome's solutions?
Siket:  Many of our customers have addressed how impressed they are with our products meeting even the most challenging network requirements. Unlike competitors, our acceleration cards and software allow standard servers to excel in next-generation data centers, giving our customers a solution that can easily extend across multiple locations. Our products simply work in standard server designs with standard Linux applications, and provide significant increases to network throughput while reducing CPU utilization for the applications. By launching the Netronome SDN and NFV product-line, we are creating new opportunities for customers to broaden the reach of their appliance solutions while benefiting from high-performance within today's traffic-heavy networks.
VMblog:  Finally, how will Netronome look to build off this major company announcement in the coming months?
Siket:  Releasing our SDN product line is a significant step into what we believe to be the future of networking. This transformation will only accelerate as companies recognize the benefits of SDN and NFV, and we have now positioned the company to be instrumental in how OEMs, ODMs and end-users construct their networks moving forward. 2014 has already been a memorable year for Netronome, and as more companies adopt SDN and NFV architectures we're excited to see where it takes us.
##

Tuesday, March 11, 2014

DPI (Deep Packet Inspection) and DCI (Deep Content Inspection)

Dissecting DPI / DCI (Deep content inspection) and available options

http://en.wikipedia.org/wiki/Deep_packet_inspection

In a computer network, information from L2 to L7 flows across every link, and various security applications provide intrusion prevention and intrusion detection (IPS/IDS).

I have come across various solutions in the past, each product specializing in its own way and doing one thing better than the others.

In an ideal network, monitoring and reporting are done as two segments on the same network pipe (link).

Monitoring a 1/10/40/100G interface in-line is possible only with a hardware-based solution.

  • Filtering traffic on the fly is possible from the same hardware, which decides what traffic is passed through based on a policy rule
  • Triggers can apply policy changes to the QoS or traffic type

Reported statistics can be fed into multiple software-based appliances (tools).

High-ranking websites blocked in mainland China using Deep Packet Inspection
Alexa Rank | Website | Domain | URL | Category | Primary language
6 | Wikipedia | wikipedia.org | www.wikipedia.org | Censorship-Free Encyclopedia | English
1 | Google | google.com | www.google.com | World-wide Internet Search Engine | English
1 | Google Encrypted | google.com | encrypted.google.com | Search | English
2 | Facebook | facebook.com | www.facebook.com | Social network | English
3 | YouTube | youtube.com | www.youtube.com | Video | English
24693 | OpenVPN | openvpn.net | www.openvpn.net | Avoid political internet censorship | English
33553 | Strong VPN | strongvpn.com | www.strongvpn.com | Avoid political internet censorship | English
78873 | Falun Dafa | falundafa.org | www.falundafa.org | Spiritual | English
1413995 | VPN Coupons | vpncoupons.com | www.vpncoupons.com | Avoid political internet censorship | English
2761652 | ElephantVPN | elephantvpn.com | www.elephantvpn.com | Avoid political internet censorship | English


IPOQUE
http://www.ipoque.com/en/products/prx-g-series

PRX G-Series
The Next Generation of Network Intelligence
Traffic Management and Policy Enforcement

The PRX G-Series product line is a carrier-class bandwidth management, policy enforcement and network intelligence system. It identifies all traffic on the operator network based on deep packet inspection and provides an extensive suite of capabilities to monitor, manage, and monetize network application traffic.
As a network operator the ipoque PRX G-Series helps you to reduce operational costs of your network as well as to identify and eliminate revenue leakage. Additionally, it allows you to introduce new application-based services models that subscribers are demanding in an all-IP world.

Some deployment scenarios of these products:
PACE (Protocol and Application Classification Engine) and PADE (Protocol and Application Decoding Engine) are their proprietary software stacks (tools) for building applications in OSS.

As the world moves toward applications, aka "apps", many newer types of traffic are flowing all over the TCP/IP pipe.

MANAGE AND ENSURE THE RESPONSE TIME & THROUGHPUT
OF EVERY APPLICATION YOU OWN IN EVERY LOCATION YOU RUN IT

AppEnsure delivers an enterprise view of all apps running; legacy, custom and purchased, in all locations; physical, virtual, private and public cloud. This is a dynamic view that will update in real time to show all instances in all locations with each transaction response time. The overall throughput of all instances of an app delivers a deterministic demand load profile, not an inferred one from resource utilization. Manage your performance!

Wednesday, March 5, 2014

Martin Casado - the SDN guy


Most of us in the industry know this guy by name... yes, he is the one who did the proof-of-concept study and research on SDN and on how networks should work in the future.


  • He worked on the OVS (Open vSwitch) project, which of course is open source and was added to Linux
  • He founded a company called Nicira and sold it to virtualization giant VMware for a few billion dollars; the product was folded into VMware's existing ESXi line-up under the name NSX

Here are some artifacts of his work - the man to watch if you are into SDN:
http://yuba.stanford.edu/~casado/
http://www.youtube.com/results?search_query=martin+casado+open+source

SDN(Software Defined Network) On a Chip

As the title reads, SDN technology at the chip level is being cooked up as of ONS 2014 (Open Networking Summit).

Wrapping up my quick notes from ONS 2014, a near-complete picture of "how, why and what SDN is and where it is going" emerged.

A good guess at how the problem is being materialized, commoditized and made vendor-neutral is here: several start-ups are working on moving the SDN story down to the chip level.
This makes complete sense, because chip-based switching solutions are the ones sitting closest to the bare metal. A common layer of abstraction (an SDN OpenFlow-based controller interface) at this level would open up avenues for vendors to build their own networking solutions.

Now, someone has to do the plumbing. Some leads are here; below is a list of companies with ambitions to bring an SDN OpenFlow controller into a chip and sell it to potential networking vendors.

http://www.broadcom.com/products/Switching/Software-Defined-Networking-Solutions/OF-DPA-Software
Broadcom's OpenFlow Data Plane Abstraction (OF-DPA) software enables development and deployment of scalable and high performance OpenFlow-based Software-Defined Networking applications on widely deployed Broadcom-based switches.

OF-DPA is compliant with the Open Networking Foundation (ONF) OpenFlow v1.3.1 specification. OF-DPA v1.0 defines and implements a hardware abstraction layer that maps the industry-leading StrataXGS switch architecture to the OpenFlow 1.3.1 switch and pipeline.

The OF-DPA specification and API are openly published and provided with turnkey reference implementation on ODM and OCP-compliant switches to enable a community and academia-based development ecosystem. Any OpenFlow v1.3.1 compliant controller and agent can be integrated with OF-DPA to enable popular SDN use cases such as Virtual Tenant Networks, Network Virtualization, Traffic Engineering and Service Chaining.

OF-DPA software is available in two packages:

● An OEM & ODM Development Package (ODP), which is a full source code package distributed under Broadcom SLA.
● A Community Development Package (CDP), which is an Open API library with Application Development Kit distributed on GitHub.

http://www.freescale.com/webapp/sps/site/homepage.jsp?code=VORTIQA
Gain market advantage by leveraging Freescale's cutting-edge, commercial-grade VortiQa Software-Defined Networking Solutions. The two products available, VortiQa open network (ON) director software and VortiQa open network (ON) switch software, leverage open standards such as OpenFlow(TM) protocols to improve manageability of networks. By leveraging these optimized and highly portable software products on multicore platforms, customers can reduce OPEX and CAPEX. Develop your next innovative design with VortiQa SDN Solutions!
        
http://www.xpliant.com/
Xpliant was founded by former Marvell employees including Tsahi Daniel and Sachin Gandhi, Xpliant’s CTO and COO, respectively. Founded around 2011, the company has been participating in the Open Networking Foundation (ONF), and Daniel serves on the recently formed Chipmakers’ Advisory Board. Even though it’s in stealth mode, Xpliant hasn’t exactly been invisible.

http://www.barefootnetworks.com/
Barefoot, meanwhile, has its roots in Texas Instruments. Martin Izzard, who’s reportedly running Barefoot, spent more than 20 years at TI before leaving in mid-2013, according to his LinkedIn profile. Assuming Barefoot is this company, it ran an online contest to design its logo. At least they’re having fun at this.

Tuesday, March 4, 2014

Virtual network testing tool

Source : https://supportforums.cisco.com/docs/DOC-26261

Created on: Jul 27, 2012 12:08 PM by tokunath - Last Modified:  Aug 16, 2012 4:14 PM by Lisa Latour

Chalk Talk -  Virtualized Network Testing Tools – Use Cases and Deployment Guidance

VERSION 6  Click to view document history

Introduction
Server and storage virtualization is a hot trend gaining momentum in nearly all industries with its promise of reducing the total cost of ownership in the datacenter. Running multiple virtual machines (VMs) on a single server is not a new concept, but the levels that are now possible open the door to endless use cases and opportunities for scale. This generates both excitement and concern for datacenter operational teams that must ensure critical business applications running on virtual server and storage platforms perform reliably and remain available at all times. Network infrastructure virtualization solutions are also beginning to crop up in datacenters, as VM considerations such as mobility, access, high availability and security must be dealt with. Network architects are evaluating and deploying many of these new technologies including virtual access switches, virtual routers, and virtual security appliances.

Most IT organizations require that some level of certification testing be completed prior to deploying any new datacenter system or design in order to prevent outages that may impact revenue. Most out-of-service, or lab testing efforts involve building a prototype of the proposed network design and then using test tools to generate simulated application traffic on the network under test while it is subjected to stresses such as simulated failures or high levels of network traffic and transactions. As network engineers and testers begin to peel away the various layers of complexity with these virtualized datacenter designs, they will soon come to realize that their legacy testing tools will likely come up short in their ability to test end-to-end solutions, and that the only way to truly validate virtual networks designs is with virtual tools.

VM-Based Test Tool Use Cases
Test methodologies for conformance, functionality and performance testing of network systems and devices have not drastically changed over the years, despite test tools becoming more sophisticated with their ability to simulate applications and quantify a users “quality of experience”. The RFC 2544 standard, for example, was established by the Internet Engineering Task Force (IETF) in 1999, and is still considered the de facto methodology for benchmarking performance metrics of network systems. This RFC provides an out-of-service benchmarking methodology using throughput, back-to-back, frame loss and latency tests, with each test validating a specific part of an SLA. The methodology defines the frame size, test duration and number of test iterations. Network Engineers familiar with this methodology will immediately recognize the challenges applying it to the virtual world. For example, how would you go about benchmarking the maximum no drop rate of a Nexus 1000V software switch that has no physical ports to plug a traffic generator into? How would you gauge database replication performance between two VMs that reside on the same host? Will web performance between two VMs communicating through a virtual firewall match that of a physical device during the busy transaction hours? Network Engineers are realizing that the only way to conduct these types of out-of-service benchmarking tests are with software-based test tools that can reside on a virtual machine, allowing them deepest visibility into the virtual datacenter infrastructure.

Figures 1-4 below illustrate some functional use cases for VM based test tools that can be leveraged to validate the functionality, conformance, baseline performance and security of a virtualized datacenter.

The first test case shown in figure 1 shows how virtual test ports installed directly on a hypervisor can be used to measure VM to VM performance by sending test traffic between test port VMs that reside directly on a host under test. This testing can be limited to intra-chassis VM performance, or extended to inter-chassis and network performance testing by deploying ports on different hosts.

The second test case shown in figure 2 illustrates how a vSwitch such as the Nexus 1000V can be evaluated for performance, scalability and switch feature conformance in accordance with RFC 2889. A large number of VM-based test ports would be utilized to setup various test flows, including unicast and multicast as called for by the particular design requirements.

The third test case shown in figure 3 presents a “Cloud datacenter” design, where the Cisco Virtual Security Gateway (VSG) is leveraged to separate a VM deployment into “zones” so that zone-based firewall rules can be applied for inter-VM communications. An ASA 1000V Cloud firewall is positioned at the edge of each zone to secure the cloud perimeter against network-based attacks. By deploying a combination of VM-based test ports on the hosts, and physical test ports on the network, it is possible to validate functionality as well as conducting the standardized RFC 2647 “Benchmarking Terminology for Firewall Performance” test suites to thoroughly evaluate the performance of the virtual firewalls.

The final example in figure 4 presents a use case to validate the loss incurred during a live VMWare (VM and/or Storage) migration from a primary to secondary datacenter. In this example, test traffic sourced or destined to a VM-based traffic generator would incur loss during VM VMotion, and the duration of this loss could be used as a benchmark for calculating the effect on user applications.

Figures 1-4: Use Cases for Virtual Network Test Tools


Spirent TestCenter Virtual (STCv)
Spirent Communications http://www.spirent.com/ is one of the leading vendors in the test tool industry, providing network and application test tools used by Enterprise, Service Provider, Government, and Network Equipment manufacturers. Spirent was one of the first vendors to develop a test solution that allows VM-hosted test ports to be controlled by a common GUI and API that is also used to control its chassis based test systems. Test engineers familiar with Spirent TestCenter will find Spirent TestCenter Virtual (STCv) to have the exact same look and feel, where the VM-based test ports appear as dedicated test chassis with a single port installed. All of the standard conformance, performance and functional tests that are supported on physical STC ports are also supported on virtual (STCv) ports.

Spirent TestCenter Virtual (STCv) Test Components
This section describes the various components of STCv, and the infrastructure elements required to host, control, and manage it in a test topology.

  • Spirent TestCenter VMware VM: Each Spirent TestCenter Virtual port is a single VMware (Linux) virtual machine. There are two versions of the Virtual TestCenter VM, a 10G version and a 100M version, each requiring a separate license purchased from Spirent. The 10G version is normally used for performance testing, while the 100M version is used for functionality testing.

  • Spirent TestCenter GUI: The Windows GUI application that is used to control Spirent TestCenter (virtual and physical) test ports. This application can reside on a physical server or in a VM, so long as there is IP connectivity to the management VLAN that the STCv virtual machine is connected to.

  • License Server: As a part of the Spirent VM purchase, the user will receive a 1RU license server. This server runs MontaVista Linux and supplies the individual VMs with a license file or license “seat”. A Spirent SE or other support representative will install the required license files on this server. The only requirement is that a “management NIC” in the TestCenter VM be able to communicate with the IP address of the license server.

  • ESX Host: For virtualized datacenter testing it is often necessary to deploy the STC VMs on the ESX host under test. For STCv 10G performance-type tests (i.e. virtual switch throughput, intra-host VM testing, virtual appliance testing) a chassis-based blade server (such as a Cisco UCS 5108) with adequate CPU and uplink capabilities would be necessary. For feature and functionality tests that do not require a high volume of “bit blasting”, a rack-based server will normally provide adequate CPU and forwarding throughput to meet testing needs.

  • Management switch: An out-of-band management switch is normally used to access the management VLAN/vNIC of the host, providing out-of-band management control and license authorization from the Spirent TestCenter GUI and license server.

  • VLAN Fan out switch: (optional). It is often useful to consolidate Virtual Test Ports onto a centralized Layer 2 switch that can be used as a centralized point of connectivity for Physical network devices under test. Care should be taken to ensure adequate capacity on this switch so that it does not become a bottleneck during performance testing.

Figure 5: Example Deployment of a Spirent TestCenter Virtual Testing Solution

Deploying Spirent TestCenter Virtual on Cisco UCS
The following steps will help guide you through an STCv deployment on Cisco UCS.


  1. Purchase the appropriate STCv license and required number of seats needed for your testing requirements. The options to choose from include 10G (performance) or 100M (functionality) test port licenses.
  2. Install license files on a license server that will have IP reachability to the management vNIC of the STCv Virtual Machines. (This task is typically completed by a Spirent sales or support team systems engineer)
  3. Download the VM files from Spirent (www.spirent.com). Select the link(s) for “Spirent TestCenter Virtual 10G/100M for VMware”, and download the FW image.
  4. Add the TestCenter Virtual files to the ESX host’s datastore and add them to the VMware vCenter inventory.
  5. Connect Network adapters to virtual switches.
  6. Each Virtual TestCenter VM has two Network adapters. One adapter is for the management connection and should be connected to a management VLAN having access to the license server. The other network adapter is the 100M or 10G adapter and should be connected to the appropriate VLAN in the test topology (Test Insertion Point). Click OK
  7. (Optimization) Set CPU affinity for the TestCenter VM
  8. Each Virtual TestCenter VM uses three virtual CPUs. For better performance each virtual CPU should be set with affinity to its own logical CPU
  9. Configure TestCenter VM management ports
  10. Configuration of the Virtual TestCenter VMs is done via the VMware console. Log in to the Virtual TestCenter VM console and configure an IP address/mask/gateway, license server and NTP server from the Linux CLI (see the sketch after these steps).
  11. Add the Virtual TestCenter Ports to Spirent Test Center Application
  12. Virtual TestCenter ports can only be used with Spirent TestCenter version 3.34 or later. Each Virtual TestCenter port will appear as its own chassis.
  13. Define the test VLANs on UCS and trunk them up to the VLAN fanout switch. Connect the network devices where STCv ports are required to access ports mapped into the appropriate VLANS. Refer to figure 5 above for an illustration.
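As an illustration of step 10, the management address is set from the guest's console; the exact commands depend on the STCv image, but on a generic Linux CLI it would look something like the following (all addresses are placeholders):

# assign the management IP, netmask and default gateway on the management NIC
ifconfig eth0 192.0.2.10 netmask 255.255.255.0 up
route add default gw 192.0.2.1
# confirm the license server is reachable before adding the port in the GUI
ping -c 3 192.0.2.100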


Tom Kunath, CCIE No. 1679, is a Solutions Architect in Cisco Systems Enhanced Customer Aligned Test Services (eCATS) Team. With nearly 20 years as a consultant in the networking industry, Tom has helped design, test and deploy some of the largest Enterprise, Financial, Service Provider and Federal Government customer networks. In addition to his CCIE, Tom holds a Bachelor of Science degree in Electrical Engineering and industry certifications from Juniper and Nortel Networks.


Bookmarks

Blogs

http://vmblog.com/home.aspx
http://www.colinmcnamara.com/
http://blog.cimicorp.com/
http://narendradhara.wordpress.com/2013/12/
http://blog.scottlowe.org/
http://itechthereforeiam.com


Github
http://www.github.com/CiscoSystems
http://www.github.com/DATACENTER
https://github.com/beeyeas/workspace-code
http://pcottle.github.io/learnGitBranching/

To Be Organized







Netronome Launches Data Plane Hardware and Software for SDN and NFV Designs

Netronome, the leading provider of data plane processing solutions for software-defined networks (SDN), today announced a platform architecture to augment virtual switch implementations with hardware acceleration NICs in standard servers for SDN and network functions virtualization (NFV) designs. The new products include a suite of FlowNIC PCIe Gen3 cards that scale up to 200 Gbps, along with a new FlowEnvironment software package that provides standards-based APIs and configuration protocols for virtual switch offload and acceleration.
"Netronome solves the scalability problem of virtual switch implementations in the intelligent network locations where the highest throughput and I/O densities are required, while maintaining the rapid evolution of a software-based edge,” said Niel Viljoen, founder and CEO, of Netronome. “With broad applicability to any virtualized server, the new products are optimized for use in servers running network and security applications, such as SDN middleboxes, SDN gateways and NFV appliances.”
Hyperscale data centers are leading a revolution that is migrating into service provider and enterprise network designs. A complex infrastructure is replaced with a simplified packet core built using merchant switch silicon, surrounded by a new, software-based, intelligent edge. The intelligent edge is comprised of standard servers that are responsible for hosting business applications, and also providing network and security services, while simultaneously directing traffic among virtual machines across a sophisticated overlay network. These network functions are implemented within a virtual switch and consume valuable processing resources. Consequently the complex and varying workloads stifle both application and networking performance.
“NTT Communications has been at the forefront of advancing SDN technologies with commercial use for our cloud computing, datacenter and network services for many years,” said Mr. Yukio Ito, director, member of the board, and senior vice president of service infrastructure at NTT Communications. “We’re evaluating Netronome’s solution of processors, software and NICs and expect to use them to extend our SDN offering into the Cloud-VPN Gateway with close collaboration with NTT Innovation Institute, Inc.”
“FirePower platforms have routinely set industry benchmarks for performance and security effectiveness in data center deployments,” said Tom Ashoff, vice president of engineering, Cisco Systems. “Netronome flow processors and software provide us the feature set and programmability to continue to keep pace with the rapid changes in these hyperscale networks.”
The FlowNIC-6xxx family packs the industry’s highest performance and port density into a PCIe Gen3 adapter, including 2x40, 4x40, 1x100 and 2x100 gigabit Ethernet options. The cards feature up to four PCIe Gen3 x8 interfaces, delivering unmatched host bandwidth to standard single, dual and quad socket servers. The cards feature 216 programmable cores to keep pace with the rapid change in SDN protocols and standards. Additional hardware accelerators are provided for cryptography, nanosecond accuracy time-stamping, SR-IOV, VMDq, and RoCE. Massive on-chip and on-board memories deliver 24M flow table entries, and 128K wildcard rules to satisfy the requirements of the most demanding SDN and NFV applications.
Netronome’s new FlowEnvironment software delivers more than a 20X increase to virtual switching performance and significantly increases the number of virtual machine instances available per server. The FlowEnvironment includes standards-compliant support for Open vSwitch (OVS) 2.0, OpenFlow 1.4, Intel DPDK, and network virtualization protocols such as NV-GRE and VXLAN. The production-ready software provides standard APIs and is fully supported across Netronome’s FlowProcessors that scale up to 400 Gbps. The software is supported on both existing NFE-32xx acceleration cards and the new FlowNIC-6xxx cards.
Netronome’s processors, software and FlowNICs are available to OEMs, ODMs and hyperscale network operators. Complete solutions can be purchased direct from Netronome, and through a premier set of partners and suppliers.
“We are pleased that so many customers and partners have chosen Netronome’s processors, software and NICs to help fulfill their vision for SDN and NFV,” said Jarrod Siket, senior vice president of marketing, at Netronome. “This complete solution benefits our customers by enabling their standard servers to reach line rate network performance while returning valuable, and previously wasted, compute resources to the applications and services that need them most.”

Published Tuesday, March 04, 2014 7:02 PM by David Marshall



Netronome Systems Inc.


COMPANY: Netronome Systems Inc.
MARKET SEGMENT: Layer 4-7 traffic management (now) and application-aware network processors (soon)
LOCATION: Pittsburgh -- although the Website refers to Cranberry Township, Pa., dangerously close to Scranton
HEADCOUNT: 79, expected to be 94 by January, and 125 by mid-2008
FORMED: 2003
FINANCING: $28 million from 3i Group plc , Spinner Asset Management LLC , Top Technology Ventures Ltd. , and Tudor Investment Corp. Investors are adding another round to go with the Intel Corp. (Nasdaq: INTC) deal (see below).
WHAT IT DOES: Netronome sells cards and systems for Layer 4-7 processing functions, targeting applications delivery. A licensing deal for the Intel IXP2800 chip line means Netronome will also start designing and selling its own chips, which will include processing for Layers 2 through 7.
CUSTOMERS: None announced. Netronome says it's got design wins, but its customers' products haven't come to market yet.
PARTNERSHIPS: None, other than the Intel licensing deal.
COMPETITORS: Cavium Networks Inc. (Nasdaq: CAVM), Freescale Semiconductor Inc. , Raza Microelectronics Inc. , and ultimately Intel.
On the systems side, Netronome competes with the likes of Bivio Networks Inc. and CloudShield Technologies Inc.
MANAGEMENT SNAPSHOT:

  • Niel Viljoen, CEO – Viljoen was part of the Fore Systems crew acquired into Marconi. His name also got caught in the shareholder wrath over shares awarded to ex-Fore execs. (See Fraud Case Hangs Over Fore Fat Cats.) He left Marconi circa 2001 and spent a couple years as an angel investor in outfits like Intune Networks .
  • Derek McAuley, CTO – Another Fore veteran, McAuley also spent time with Cisco after its acquisition of Atlantech in 2000. Earlier, he'd been the founding director of an Intel lab at Cambridge in the U.K. and had a similar position with Microsoft Corp. (Nasdaq: MSFT) -- but his Ph.D. is in ATM networking.


  • Jim Finnegan, senior VP of engineering – Brought into Intel through its acquisition of Basis Communications, Finnegan eventually oversaw the development of Intel's network processor lines. Finnegan joined Netronome before the Intel deal, but his experience becomes an obvious asset now that the IXP2800 is back in his hands.
WHAT TO WATCH: The IXP2800 should expand Netronome's market, and it brings in some revenues. Meanwhile, Netronome's original business of boards and systems is just starting to rustle; customer announcements could help indicate how much of a dent it's made.
RECENT NEWS:

— Craig Matsumoto, West Coast Editor, Light Reading






  • Blue Coat Acquires Netronome SSL Technology to Extend Leadership in Enterprise Security

    Acquisition to Deliver SSL Traffic Visibility to IDS/IPS, Advanced Malware and Forensic Solutions in the Blue Coat Partner Ecosystem

    Thursday, May 9, 2013
    SUNNYVALE, Calif., May 9, 2013 – Blue Coat Systems, Inc., a market leader in Web security and WAN optimization, today announced that it has acquired the SSL appliance product line from Netronome, a leader in flow processing. The Netronome SSL appliances enhance Blue Coat’s ability to enable a safe and productive Internet experience for enterprise organizations.
    Web and SSL traffic can account for more than half of all traffic on the corporate network, driven by the adoption of cloud-based SaaS applications and the growing use of HTTPS for Web sites such as Google and Facebook. This type of encrypted traffic, left uninspected, can hide threats or other malicious activities. To effectively implement security protections before the traffic reaches the endpoint, it is important for enterprises to have in-network visibility into encrypted traffic.
    “Many networks face a crisis today as the rise in SSL-based traffic creates an unrealistic reliance on the endpoint to detect threats or data loss,” said Greg Clark, CEO at Blue Coat Systems. “By analyzing SSL traffic within the network, corporations substantially reduce risk.”
    The market-leading Netronome SSL appliances deliver SSL decryption in networks ranging from 100 Mbps to 10 Gbps full duplex, providing enterprises with visibility into the traffic on their network. As a fully transparent proxy, the SSL appliances seamlessly integrate into existing security environments to protect infrastructure investments while mitigating costly changes to network topology or client re-configuration. The appliances also provide up to four data feeds to a wide range of in-network security applications, such as intrusion prevention, intrusion detection, sandboxing and forensics, which can then analyze the data for threats or data breaches.
    “Increasingly, analytics is an important tool for analyzing advanced threats and avoiding data loss,” said Christian Christiansen, vice president of security products and services at IDC. “To be effective at analytics, enterprises need to inspect traffic that flows through their network. SSL is one of the more important traffic flows. Blue Coat’s acquisition of the Netronome technology enables customers to add this analytics capability by providing inspection capabilities at scale.”
    Once the Netronome SSL appliances are fully integrated with existing Blue Coat products, enterprises will be able to apply consistent policy to all traffic on the network. 
    “The SSL appliances are the industry’s highest performance transparent proxy for SSL network communications and provide unique visibility exposing inbound threats and outbound leaks,” said Howard Bubb, chief executive officer at Netronome. “The product is an ideal complement to Blue Coat’s comprehensive suite of security products and extends their leadership position in securing SSL communications.”
    The Netronome appliances are currently available through certified OEM partners, including Sourcefire and VSS Monitoring, and will be available through authorized Blue Coat channel partners shortly. As part of the acquisition, the existing product team will join Blue Coat to provide uninterrupted support to existing customers and OEM partners.
    “Netronome’s SSL technology has played a valuable role in helping us to realize our Agile Security® vision,” said Tom McDonough, president and COO of Sourcefire. “Blue Coat's acquisition of the technology will broaden our existing technology partnership and enhance global support benefits to our customer base.”

    About Blue Coat Systems

    Blue Coat Systems is a leading provider of Web security and WAN optimization solutions. Blue Coat offers solutions that provide the visibility, acceleration and security required to optimize and secure the flow of information to any user, on any network, anywhere. This application intelligence enables enterprises to tightly align network investments with business requirements, speed decision making and secure business applications for long-term competitive advantage. Blue Coat also offers service provider solutions for managed security and WAN optimization, as well as carrier-grade caching solutions to save on bandwidth and enhance the end-user Web experience. For additional information, please visit bluecoat.com.
    Blue Coat, ProxySG and the Blue Coat logo are registered trademarks or trademarks of Blue Coat Systems, Inc. and/or its affiliates in the United States and certain other countries. All other trademarks mentioned in this document are the property of their respective owners.

    Media Contacts

    Jennifer Ruzicka
    Blue Coat Systems
    jennifer.ruzicka@bluecoat.com
    408-541-3330
    Dave Bowker
    Schwartz MSL
    bluecoat@schwartzmsl.com
    781-684-0770

    Test bed on the cloud -1

    Lately I was wondering: why can't test automation be done in a hosted public or private cloud?
    For things like:

    • Network equipment 
    • Throughput / utilization 
    • Firewall / gateway
    • DNS
    • Load balancing 
    • Content Distribution Network

    Digging deeper, I found answers and some ready-made solutions; of course, we all need a custom solution because each problem is custom/unique.

    Answers are here ...
    TeraVM

    TeraVM can be easily deployed for testing in the Amazon Elastic Compute Cloud (EC2)
    Milpitas, CA, November 18, 2013 – Shenick Network Systems, provider of per-flow virtualized IP test and measurement solutions, today announced that TeraVM version 10.6 is generally available and supports Amazon EC2 deployment.
    TeraVM is widely used by network equipment manufacturers and service providers around the world for their IP testing needs. Because TeraVM is a 100% virtualized test solution it can be easily deployed in a virtual test bed or cloud environment such as Amazon Elastic Compute Cloud (EC2). With EC2 support, the flexibility of TeraVM increases considerably because customers now have the option of testing their virtual network functions in the Amazon cloud or on premises.
    With release 10.6, TeraVM is available in the native Amazon Machine Image (AMI) format and is easily deployed using the Amazon VPC (Virtual Private Cloud) service. An Amazon VPC allows Shenick customers to provision a logically isolated section of the Amazon cloud which can then be used to test features and functionality of their virtual networking offerings such as a firewall, CDN or load balancer.
    "We continually strive to make TeraVM as flexible as possible from a deployment environment perspective", said Ameet Dhillon, VP of Marketing and Business Development, Shenick. "Amazon EC2 is clearly one of the most popular public cloud offerings so our recently announced support is another significant step down the path to providing our customers as many deployment options for TeraVM as possible."

    About Shenick Network Systems
    Shenick Network Systems delivers IP test and measurement solutions for today’s virtualized and physical network infrastructure. The company's flagship product TeraVM™ is a fully virtualized solution that emulates and measures terabits of stateful application traffic with the ability to easily pinpoint and isolate problem flows. TeraVM supports all major hypervisors and can be deployed on industry-standard hardware offering a uniquely cost effective method to test. TeraVM helps service providers and network equipment manufacturers analyze the performance limitations and capabilities of a wide variety of security and networking devices including VPN/Firewall, vSwitch, DPI or IPS/IDS, vLoad Balancer and video infrastructure. Established in 2000, Shenick has over 150 customers worldwide and is based in Milpitas, California and Dublin Ireland.






    Appvance 

    An app testing solution on the cloud - PushToTest Becomes Appvance
    New name reflects focus on advancing apps and protecting brands.

    Our new name Appvance reflects how impactful our platform is in advancing the quality of social/mobile apps for our clients

    Campbell, CA (PRWEB) March 18, 2013
    PushToTest Inc., maker of the world's most used app performance and validation platform, today announced it has become Appvance Inc. The company will continue to focus on helping enterprise customers improve app performance under stress and load.
    "Appvance is being launched on a foundation of 10 years of work with developers and large corporations such as Best Buy, PepsiCo and Deloitte into a standard methodology that works for any organization while they build apps, deliver apps, and as their apps go viral,” said Kevin Surace, Appvance CEO. “Our growing enterprise customer base requires world-class app validation and top-notch support. To reflect this focus, we have also named our enterprise-class offering ‘Appvance Enterprise.’"
    “I am delighted with our continued growth as our customers depend on Appvance solutions to achieve 5-star app ratings by doing more than just testing for performance,” said Frank Cohen, Appvance founder and CTO. “Modern apps run from distributed cache environments like Akamai, connect to dozens of unrelated Web services, and present risk to your brand when the app's end-user experience fails to meet expectations. Only Appvance gives business managers and their IT teams a way to stay ahead of performance changes as app usage increases across all platforms.”
    Appvance is much more than simple performance evaluation. Appvance delivers app development and deployment methodology, techniques, and platforms to apply to each type of app (Web apps, Ajax, Social/Mobile apps, SOA, Web services using SOAP/REST, and BPM/ESB), a methodology to evaluate business risk and efficiency while automating the orchestration of treat tests (test code, scripts, deployment) as code and use version control to control and share test assets, a means to repurpose a single test for functional tests, load and performance testing, and production monitors, and an Agile Software Development Lifecycle (SDLC) methodology for modern app development to deliver test management to geographically distributed teams.
    More information is available at Appvance.com.
    About Appvance Inc.
    Appvance is an enterprise software company providing products and services including app performance validation infrastructure, development, process engineering, business risk analysis methodology, quality test orchestration methodology, and training. IT managers, DevOps, and brand managers use Appvance Enterprise to ensure their apps will function as promised, to encourage usage and protect brand image. Appvance Enterprise offers end-to-end app performance verification, including Web apps, Ajax, Social/Mobile apps, SOA, Web services using SOAP/REST, and BPM/ESB, for organizations of all sizes. While fully compatible with industry standard functional test scripting languages, including Selenium and soapUI, appvance's patent pending uxAvatar technology replaces scripting with recordings of user actions, simplifying proof-of-performance and the entire app testing process.
    Appvance, Appvance Enterprise, uxAvatar, PushToTest, and TestMaker are trademarks of Appvance Inc.

    Spirent
    http://www.spirent.com/Ethernet_Testing/Platforms/Virtual

    bee-social