Tuesday, December 23, 2014

GRE Tunnel in OVS

Quick command reference for setting up a GRE tunnel with OVS


/*Add a bridge br0*/
sudo ovs-vsctl add-br br0

/*Add a temp internal interface to connect to guest VM*/
sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal



/*Set an IP in the new subnet so the guest can reach the other end of the GRE tunnel*/
sudo ifconfig tep0 192.168.200.20 netmask 255.255.255.0


/*Create a GRE (Generic Routing Encapsulation) tunnel interface*/
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=<GRE tunnel endpoint on other hypervisor>

#sudo ovs-vsctl show
3fb59768-6943-496f-8963-2cd2e2935075
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "tep0"
            Interface "tep0"
                type: internal
    ovs_version: "2.0.2"
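A mirrored setup is needed on the other hypervisor. The following is a minimal sketch; the tep0 address is just an example from the same subnet, and remote_ip points back at the first host:

/*On the second hypervisor*/
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal
sudo ifconfig tep0 192.168.200.21 netmask 255.255.255.0
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=<GRE tunnel endpoint on the first hypervisor>

/*A ping between the two tep0 addresses (e.g. 192.168.200.21 to 192.168.200.20) should then go through the GRE tunnel*/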

Monday, December 22, 2014

2015 Server Giants and NGN (Next Gen. Networks) - CIMI Blog

Posted: 22 Dec 2014 05:23 AM PST
Next year is going to be pivotal in telecom, because it’s almost surely going to set the stage for the first real evolution of network infrastructure we’ve had since IP convergence twenty years ago.  We’re moving to the “next-generation network” everyone has talked about, but with two differences.  First, this is for real, not just media hype.  Second, this “network” will be notable not for new network technology but for the introduction of non-network technology—software and servers—into networking missions.
Today we begin our review of the network players of 2015 and beyond, focusing on how these companies are likely to fare in the transition to what’s popularly called “NGN”.  As I said in my blog of December 19th, I’m going to begin with the players with the most to gain, the group from which the new powerhouse will emerge if NGN evolution does happen.  That group is the server vendors, and it includes (in alphabetical order) Cisco, Dell, HP, IBM, and Oracle.
The big advantage this group has is that they can expect to make money from any network architecture that relies on hosted functionality.  While it’s often overlooked as a factor in determining market leadership during periods of change, one of the greatest assets a vendor can have is a justification to endure through a long sales cycle.  Salespeople don’t work for free, and companies can’t focus their sales effort on things that aren’t going to add to their profits.  When you have a complicated transformation to drive, you have to be able to make a buck out of the effort.
The challenge that SDN and NFV have posed for the server giants is that the servers that are the financial heart of the SDN/NFV future are part of the plumbing.  “NFVI” or NFV Infrastructure is just what you run management, orchestration, and virtual network functions on.  It’s MANO and VNFs that build the operators’ business case.  So do these server players try to push their own MANO/VNF solutions and risk limiting their participation in the server-hosted future to those wins they get?  Do they sit back to try to maximize their NFVI opportunity and risk not being a part of any of the early deals because they can’t drive the business case?
The vendor who’s taken the “push-for-the-business-case” route most emphatically is HP, whose OpenNFV architecture is the most functionally complete and now is largely delivered as promised.  A good part of HP’s aggression here may be due to the fact that they’re the only player whose NFV efforts are led by a cloud executive, effectively bringing the two initiatives together.  HP also has a partner ecosystem that’s actually enthusiastic and dedicated, not just hanging around to get some good ink.  HP is absolutely a player who could build a business case for NFV, and their OpenDaylight and OpenStack support means they could extend the NGN umbrella over all three of our revolutions—the cloud, SDN, and NFV.  They are also virtually unique in the industry in offering support for legacy infrastructure in their MANO (Director) product.
Their biggest risk is their biggest strength—the scope of what they can do.  You need to have impeccable positioning and extraordinary collateral to make something like NFV, SDN, or cloud infrastructure a practical sell.  Otherwise you ask your sales force to drag people from disinterested spectators to committed customers on their own, which doesn’t happen.  NGN is the classic elephant behind a screen, but it’s a really big elephant with an unprecedentedly complicated anatomy to grope.  Given that they have all the cards to drive the market right now, their biggest risk is delay that gives others a chance to catch up.  Confusion in the buyer space could do that, so HP is committed (whether they know it or not) to being the go-to people on NGN, in order to win it.
The vendors who seem to represent the “sit-back” view?  Everyone else, at this point, but for different reasons.
Cisco’s challenge is that all of the new network technologies are looking like less-than-zero-sum games in a capital spending sense.  As the market leader in IP and Ethernet technologies, Cisco is likely to lose at least as much in network equipment as it could hope to gain in servers.  Certainly they’d need a superb strategy to realize opex efficiency and service agility to moderate their risks, and Cisco has never been a strategic leader—they like “fast-followership” as an approach.
Dell seems to have made an affirmative choice to be an NFVI leader, hoping to be the fair arms merchant and not a combatant in making the business case for NGN.  This may sound like a wimpy choice, but as I’ve noted many times NGN transformation is very complicated.  Dell may reason that a non-network vendor has little chance in driving this evolution, and that if they fielded their own solutions they’d be on the outs with all the network vendors who push evolution along.  Their risks are obvious—they miss the early market and miss chances to differentiate themselves on features.
IBM’s position in NFV is the most ambiguous of any of the giants.  They are clearly expanding their cloud focus, but they sold off their x86 server business to Lenovo and now have far less to gain from the NGN transformation than any of the others in this category.  Their cloud orchestration tools are a strong starting point for a good NFV MANO solution, but they don’t seem interested in promoting the connection.  It’s hard to see why they’d hang back this long and suddenly get religion, and so their position may well stay ambiguous in 2015.
Oracle has, like HP, announced a full-suite NFV strategy, but they’ve yet to deliver on the major MANO element and their commitment doesn’t seem as fervent to me.  Recall that Oracle was criticized for pooh-poohing the cloud, then jumping in when it was clear that there was opportunity to be had.  I think they’re likely doing that with SDN, NFV, and NGN.  What I think makes this strategy a bit less sensible is that Oracle’s server business could benefit hugely from dominance in NFV.  In fact, carrier cloud and NFV could single-handedly propel Oracle into second place in the market (they recently slipped behind Cisco).  It’s not clear whether Oracle is still waiting for the sign of NFV success, or will jump off their new positioning to make a go at market leadership.
I’m not a fan of the “wait-and-hope” school of marketing, I confess.  That perhaps makes me a supporter of action, and more sympathetic to HP’s approach than to those of the others in this group.  Objectively, I can’t see how anyone can hope to succeed in an equipment market whose explicit goal is to support commodity devices, except on price and with much pain.  If you don’t want stinking margins you want feature differentiation, and features are attributes of the higher layers of SDN, NFV, and the cloud.  If those differentiating features are out there, only aggression is going to get to them.  If they’re available in 2015 then only naked aggression will do.  So while I think HP is the lead player now, even they’ll have to be on top of their game to get the most from NGN evolution.

Wednesday, December 17, 2014

Search - Splunk, Elasticsearch; Log Management - Sumologic, Logstash

Elasticsearch says it's on a mission to make massive amounts of data available to businesses. Toward that end, in February the company debuted release 1.0 of its Elasticsearch search and analytics software for working with big data. The software is part of the company's open-source ELK stack that includes Elasticsearch, Logstash (log management) and Kibana (log visualization) software.
Elasticsearch is competing with Solr, the open-source search technology from the Apache Software Foundation. Many reviewers, users and solution providers give both technologies high marks. So it will be interesting to see if both gain traction, or if one or the other wins out.


  • Logstash is an agent to collect and mine logs: https://github.com/logstash-plugins (a quick smoke test is shown below)
  • Also see Sumo Logic, which claims to offer its log management agent in AWS
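For a quick feel of Logstash, the classic smoke test pipes stdin straight to stdout (a minimal sketch, run from the Logstash install directory):

bin/logstash -e 'input { stdin { } } output { stdout { } }'

Type a line and Logstash echoes it back as a timestamped event.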



Risk Focus is an expert implementation partner of Splunk in the financial services arena and a big proponent of the value Splunk delivers.  We have implemented Splunk in numerous banks, funds and clearing/exchange firms across multiple mission-critical use cases.
Yet Splunk is not the only log analyzer out there.  In particular, ELK (elasticsearch + logstash + kibana) is a growing open source offering which purports to have the same functionality as Splunk at virtually no cost.
Is this a valid claim? Do Splunk and ELK provide similar functionality, stability and technical richness needed by corporate institutions like banks, asset managers, hedge funds, exchanges, industry utilities and major technology providers?

Cost of Splunk vs. ELK

Let’s start by taking a look at the overall cost of investing in Splunk versus ELK.  Corporate buyers invariably look at total cost of ownership (TCO) when making their buying decision because they want to know the true “all in cost” not just the up front fee.
“Splunk is expensive and ELK is free.”
A web search will turn up hundreds of blog entries claiming Splunk’s pay-per-gigabyte-indexed pricing model is expensive. Splunk data indexing charges sound pricey, but the way the pricing actually works is far cheaper than it first appears. Yes, an ELK license is free but Splunk is amazingly cheap, too.
Underlying the “Splunk is expensive” claim is the assumption that all data will be indexed, which is rarely true.  A proper implementation includes an up-front analysis such that only the valuable subset of a company’s data is indexed.
Midsize and larger companies tend to purchase software and data licenses at bulk discounted rates.  This gives a discount off list price and provides predictability (no “bait and switch” surprises) after adoption. For less than the cost of a single skilled FTE in a G10 country you can index a huge amount of log data with Splunk across your IT infrastructure and earn tremendous efficiency cost savings. We’ve seen Splunk’s rates dropping over time, so it’s getting even cheaper. If you just need to dabble, do basic development and testing, or a proof of concept, Splunk offers a free Enterprise license up to 500MB per day.
The primary concern for sophisticated corporate buyers is cost-to-value  or total cost of ownership (TCO).  Data license costs (at least at the pricing level of Splunk or other log analyzers) tend to be the least important factor in the TCO equation.  The value of cost savings and new revenue discoveries can provide immense financial value that dwarf the license costs.
“Once we get ELK going it will be cheaper. The time to learn and get ELK figured out is an investment.”
Both Splunk and ELK can be installed and running quickly at a basic level with minimal learning time. Efficiency gains from reducing “Team GREP” activities (which never really identify or resolve underlying issues well), the actionable intelligence obtained (allowing you to optimize IT spend very quickly), and operational risk reductions all argue in favor of installing a log analyzer as soon as possible.
Once installed, the question then becomes "can Splunk or ELK knowledge be rolled out through an organization quickly?"  Rapidly enabling multiple users to take advantage of indexing, correlation and dashboarding capabilities is necessary to generate business value from a log analyzer.
Splunk is far ahead of ELK in speed of roll out and depth of coverage.  Splunk offers a rich education program, a Professional Services group and an expansive network of skilled consulting partners.  Getting a team Splunk certified takes less than 1 month.  Hiring a Splunk partner firm to roll out capabilities quickly and build advanced correlation apps can further shortcut the time-to-value.  Splunk has a large App Store with hundreds of free and paid apps to connect to standard IT hardware and software platforms. ELK is growing rapidly and is making similar education efforts, but is years behind Splunk in these critical areas.
“Beware the hidden costs.”
We believe the investment required to institutionalize ELK is far more time-consuming and costly, in terms of lost efficiencies and investment dollars, than buying Splunk.  An ELK user is likely to find that they must create an entire infrastructure around knowledge transfer, skill-building and connectors to underlying log sources before rolling ELK out across the firm. These are hidden but serious costs of choosing ELK.  The incremental cost of a Splunk data license is much lower than the time and costs of building an in-house knowledge and support structure from scratch around ELK.
For a small organization with basic needs and a small IT support group, ELK is certainly a good investment.  It’s free from a license standpoint and rewards DIY approaches.  If you’re an IT development shop ELK is an excellent choice.
For large and sophisticated firms, especially financial institutions, energy companies, defense firms and the like, Splunk is cheaper in the long run.


Splunk Support vs. ELK Support

Part of the total value comparison between Splunk and ELK is support, both from the vendor and the surrounding community.
“Wait, ELK support isn’t free?”
If you’re going to implement and run mission-critical IT monitoring tools, then proper service level agreements (SLAs) and enterprise-level engineering processes are mandatory.
Splunk is a fully integrated indexing and analytics package with enterprise-level support from both Splunk, Inc. and the huge Splunk developer community. Buying a Splunk license provides these critical support items. Splunk has supported thousands of installations worldwide.

ELK now offers paid support, SLAs, etc. These services are not free and essentially push ELK into the “freemium” model.  Not a bad move on ELK’s part, just unproven.  ELK paid support doesn’t yet have experience supporting hundreds of large corporations.
“Is our IP protected?”
Splunk has operated a secure support infrastructure for years.
The ELK open source community is very active with support but there are no data confidentiality, security or IP protections when sharing an issue with the ELK community. This lack of IP protection does not pass stringent financial, healthcare or defense industry requirements.
“What if something goes really wrong?”
ELK = elasticsearch + logstash + kibana. These are three different businesses (aka “projects”) which have a symbiotic relationship.  Relying on three different open source businesses for one solution carries significant operational and legal risk.
Splunk, Inc. is a major publicly listed company with deep pockets and well-defined operations. They are a viable and stable corporate partner presenting little risk.
“Can I trust the vendor’s partners to do solid work?”
Splunk has a rich network of preferred domain experts like Risk Focus that must go through intensive training and certification. The installation and app development work Splunk partners do is generally reliable and meets mission-critical engineering standards.
ELK consultancies are emerging, but the community does not yet have a disciplined and well-trained network of implementation partners held firmly to common standards.  ELK has no leverage to make independent implementation consultants follow its standards.  Buyer beware.
We are open source proponents, but these issues merit serious consideration when dealing with complex regulated institutions like banks, brokers, asset managers and exchanges.



Splunk Apps vs. ELK Plugins

One of the major areas of value when adopting an enterprise-wide technology is the ability to easily integrate it with your existing IT infrastructure and applications.
Integration is normally achieved through pre-built adapters, ETL components, apps, plugins and the like.   The more pre-built adapters available, the faster a company can roll out the technology and obtain value without spending significant sums on development, testing and integration.

The Splunk App Store is a rich source of free and paid apps which you can use off the shelf or customize.  Splunk’s app selection covers most generic enterprise software (Outlook, SQL Server, Oracle, etc.), messaging solutions (TIBCO), databases, servers, firewalls, security applications, gateways, etc.
ELK has far fewer pre-built apps (plugins).  Elasticsearch has a number of them.  Most are scattered around different vendors.  There is no comprehensive certification process for ELK plugins.  ELK is in major catch-up mode here and adopting ELK means lots of DIY development to extend it across the enterprise.
To our knowledge there are no financial services app specialists for ELK and very few, if any, in most other industry verticals.  The ELK community is growing fast so the landscape of specialists will change, but for now Splunk has substantially more penetration across industry verticals.

Thursday, December 11, 2014

Flow Programming - OpenFlow

Software Defined Networking (SDN) is a set of simple ideas that defines an abstract model for switching. In general, network devices are either host devices or transit devices. Host devices request and serve information, while transit devices move information to its intended recipient. SDN focuses on defining an abstract computing model and API for transit devices, or switches.
Every switch in the SDN model is composed of ports and tables. Packets arrive at and leave the switch through ports. Tables consist of rows containing a classifier and a set of actions. When a packet is indexed against a table, a row is selected by finding the first classifier that best matches the packet. Once a row has been selected, the set of actions in that row is applied to the packet. Actions can govern the treatment of packets (drop, forward, queue), as well as modify the packet's state (rewrite, push, pop). Table state and port properties are manipulated by a well-defined protocol or API.
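As a concrete illustration of one table row (classifier plus actions), here is a minimal sketch using Open vSwitch's ovs-ofctl, which also appears later in this blog; the bridge name s1 and the addresses are just examples:

sudo ovs-ofctl -O OpenFlow13 add-flow s1 "table=0,priority=100,in_port=1,dl_type=0x0800,nw_src=10.0.0.0/8,actions=mod_dl_dst:00:00:00:00:00:02,output:2"

The match fields (in_port, dl_type, nw_src) form the classifier, and the actions list rewrites the destination MAC and forwards the packet out port 2.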
This basic idea has been at the core of networking for quite some time. Companies have internally defined instances of this model for their networking devices. Additionally, standards bodies have attempted to codify this model (ForCES, MIDCOM). The most significant contribution of today's SDN movement has been its traction. There are several instances of the SDN model, most notably the OpenFlow protocol, which have been widely supported. There are existing network operators using SDN in transport networks, data center networks, and enterprise networks.

Specification  https://www.opennetworking.org/images/stories/downloads/sdn-resources/onf-specifications/openflow/openflow-spec-v1.3.1.pdf




Monday, November 24, 2014

packer.io - vagrant

Interestingly, I have been using packer.io for a long time to create OVF images for my runtime environment.

I wanted a stable VM configuration across my development teams for dev purposes, which led me to Vagrant.

I was wondering what the clear use case for each tool is and which should be used for what; after pondering it for a while, I am sharing my thoughts here.

KEY :

*The important difference is that Vagrant pulls a base box over HTTPS (e.g. https://files.vagrantup.com/precise64.box),
while Packer can work offline from a local ISO image.


packer.io (creates a virtual machine image in a chosen format)
  • A tool to create identical AMI, OVF and OVA images of a VM
  • Needs a base image to start with, e.g. a Debian ISO (I used a local folder where I have all images downloaded, so my Packer works offline)
  • Mostly used to create VMs
  • Configuration is a JSON template
  • Quoted from https://groups.google.com/forum/#!msg/packer-tool/4lB4OqhILF8/NPoMYeew0sEJ : You want a custom OS or custom prepared box. You're generally tied to publicly available boxes (such as the ones I make). With Packer, you can create your own custom box that has your own OS installed in some specific way. This is useful if you want to match what runs in production more closely.
     
vagrant (configures and runs your virtual machine, e.g. in VirtualBox)
  • Pretty much built around VirtualBox
  • Works over HTTPS to pull stable base images
  • Quoted from https://groups.google.com/forum/#!msg/packer-tool/4lB4OqhILF8/NPoMYeew0sEJ : You want to pre-run provisioners so that `vagrant up` is faster. With Packer, you can run Chef/Puppet/etc. beforehand and bake it into an image, essentially getting rid of the provision time when doing a `vagrant up`. This can speed things up drastically. A lot of bigger companies do this sort of thing because running provisioning is pretty heavy. (See the workflow sketch after this list.)
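A minimal sketch of how the two tools can fit together; the file and box names are just examples, and it assumes the Packer template uses the vagrant post-processor to emit a .box file:

packer validate my-template.json       # check the JSON template
packer build my-template.json          # bake the image offline from a local ISO
vagrant box add mybox ./mybox.box      # register the baked box locally
vagrant init mybox                     # write a Vagrantfile that uses it
vagrant up                             # boot the pre-provisioned VM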

Other useful links
http://pretengineer.com/post/packer-vagrant-infra/
https://github.com/fredhsu/odl-vagrant
https://fredhsu.wordpress.com/2013/11/04/vagrant-with-opendaylight/
https://github.com/mitchellh/packer
https://docs.vagrantup.com/v2/installation/

Thursday, November 20, 2014

Docker OVS




I was looking into setting up a virtual switch inside Docker and came across this very useful slide deck presented at a meetup.

It is very interesting; also, never forget to read these links, in particular the one on Docker security:

https://docs.docker.com/articles/security/
https://docs.docker.com/articles/https/





Tuesday, November 4, 2014

HP VAN SDN - One Stop Shop

HP VAN SDN controller - documentation collections

How to setup SSH on public git repo

Just a quick note on how to set up SSH keys for git if you are using Windows.

These are the steps I’ve used for connecting to GitHub with SSH on Windows using the PuTTY tools.

Always remember that any hosted git project has a web interface, that a user needs an account to clone, and that this web interface takes a list of SSH public keys to be added.

Step 1 - Download Putty Tools

Download these three PuTTY tools:
  1. plink.exe
  2. pageant.exe
  3. puttygen.exe
Move them to some place permanent, for example c:\bin.

Step 2 - Add GIT_SSH Environment Variable

Find the Windows environment variable settings in Control Panel and add a new variable: set the name to GIT_SSH and the value to the location of plink.exe (for example c:\bin\plink.exe).

Step 3 - Create a Key

Use puttygen.exe to generate a public/private key pair.
Save the private key somewhere with a passphrase, and then copy the public key text to the clipboard.
Select and copy the public key text

Step 4 - Add Key to GitHub

Login to GitHub and under Account settings > SSH Keys add a new key and paste your key.
Add new SSH key to GitHub
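Step 5 - Verify the Setup (Optional)

A quick sanity check from a command prompt before cloning; this is a sketch and the key path is just an example:
c:\bin\pageant.exe c:\keys\github.ppk
c:\bin\plink.exe -v git@github.com
If the key is set up correctly, GitHub replies that you have successfully authenticated (but that it does not provide shell access), and ordinary git commands such as git clone git@github.com:<user>/<repo>.git should then work.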

Key (PKI) Management for Dummies

Courtesy: Thales Security

Saturday, October 25, 2014

Algorithmic Complexities - Big O

Big-O Algorithm Complexity Cheat Sheet

Know Thy Complexities!

Hi there!  This webpage covers the space and time Big-O complexities of common algorithms used in Computer Science.  When preparing for technical interviews in the past, I found myself spending hours crawling the internet putting together the best, average, and worst case complexities for search and sorting algorithms so that I wouldn't be stumped when asked about them.  Over the last few years, I've interviewed at several Silicon Valley startups, and also some bigger companies, like Yahoo, eBay, LinkedIn, and Google, and each time that I prepared for an interview, I thought to myself "Why oh why hasn't someone created a nice Big-O cheat sheet?".  So, to save all of you fine folks a ton of time, I went ahead and created one.  Enjoy!

Searching

Columns: Algorithm, Data Structure, Time Complexity (Average, Worst), Space Complexity (Worst)
Depth First Search (DFS) Graph of |V| vertices and |E| edges - O(|E| + |V|) O(|V|)
Breadth First Search (BFS) Graph of |V| vertices and |E| edges - O(|E| + |V|) O(|V|)
Binary search Sorted array of n elements O(log(n)) O(log(n)) O(1)
Linear (Brute Force) Array O(n) O(n) O(1)
Shortest path by Dijkstra (min-heap as priority queue) Graph with |V| vertices and |E| edges O((|V| + |E|) log |V|) O((|V| + |E|) log |V|) O(|V|)
Shortest path by Dijkstra (unsorted array as priority queue) Graph with |V| vertices and |E| edges O(|V|^2) O(|V|^2) O(|V|)
Shortest path by Bellman-Ford Graph with |V| vertices and |E| edges O(|V||E|) O(|V||E|) O(|V|)

More Cheat Sheets

Sorting

Columns: Algorithm, Data Structure, Time Complexity (Best, Average, Worst), Worst-Case Auxiliary Space Complexity
Quicksort Array O(n log(n)) O(n log(n)) O(n^2) O(n)
Mergesort Array O(n log(n)) O(n log(n)) O(n log(n)) O(n)
Heapsort Array O(n log(n)) O(n log(n)) O(n log(n)) O(1)
Bubble Sort Array O(n) O(n^2) O(n^2) O(1)
Insertion Sort Array O(n) O(n^2) O(n^2) O(1)
Selection Sort Array O(n^2) O(n^2) O(n^2) O(1)
Bucket Sort Array O(n+k) O(n+k) O(n^2) O(nk)
Radix Sort Array O(nk) O(nk) O(nk) O(n+k)

Data Structures

Columns: Data Structure, Average Time (Indexing, Search, Insertion, Deletion), Worst Time (Indexing, Search, Insertion, Deletion), Space Complexity (Worst)
Basic Array O(1) O(n) - - O(1) O(n) - - O(n)
Dynamic Array O(1) O(n) O(n) O(n) O(1) O(n) O(n) O(n) O(n)
Singly-Linked List O(n) O(n) O(1) O(1) O(n) O(n) O(1) O(1) O(n)
Doubly-Linked List O(n) O(n) O(1) O(1) O(n) O(n) O(1) O(1) O(n)
Skip List O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(n) O(n) O(n) O(n) O(n log(n))
Hash Table - O(1) O(1) O(1) - O(n) O(n) O(n) O(n)
Binary Search Tree O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(n) O(n) O(n) O(n) O(n)
Cartesian Tree - O(log(n)) O(log(n)) O(log(n)) - O(n) O(n) O(n) O(n)
B-Tree O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(n)
Red-Black Tree O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(n)
Splay Tree - O(log(n)) O(log(n)) O(log(n)) - O(log(n)) O(log(n)) O(log(n)) O(n)
AVL Tree O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(n)

Heaps

Heaps Time Complexity
Columns: Heap Type, Heapify, Find Max, Extract Max, Increase Key, Insert, Delete, Merge
Linked List (sorted) - O(1) O(1) O(n) O(n) O(1) O(m+n)
Linked List (unsorted) - O(n) O(n) O(1) O(1) O(1) O(1)
Binary Heap O(n) O(1) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(m+n)
Binomial Heap - O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n)) O(log(n))
Fibonacci Heap - O(1) O(log(n))* O(1)* O(1) O(log(n))* O(1)

Graphs

Node / Edge Management. Columns: Representation, Storage, Add Vertex, Add Edge, Remove Vertex, Remove Edge, Query
Adjacency list O(|V|+|E|) O(1) O(1) O(|V| + |E|) O(|E|) O(|V|)
Incidence list O(|V|+|E|) O(1) O(1) O(|E|) O(|E|) O(|E|)
Adjacency matrix O(|V|^2) O(|V|^2) O(1) O(|V|^2) O(1) O(1)
Incidence matrix O(|V| ⋅ |E|) O(|V| ⋅ |E|) O(|V| ⋅ |E|) O(|V| ⋅ |E|) O(|V| ⋅ |E|) O(|E|)

Notation for asymptotic growth

letter bound growth
(theta) Θ upper and lower, tight[1] equal[2]
(big-oh) O upper, tightness unknown less than or equal[3]
(small-oh) o upper, not tight less than
(big omega) Ω lower, tightness unknown greater than or equal
(small omega) ω lower, not tight greater than
[1] Big O is the upper bound, while Omega is the lower bound. Theta requires both Big O and Omega, so that's why it's referred to as a tight bound (it must be both the upper and lower bound). For example, an algorithm taking Omega(n log n) takes at least n log n time but has no upper limit. An algorithm taking Theta(n log n) is far preferable since it takes AT LEAST n log n (Omega n log n) and NO MORE THAN n log n (Big O n log n).
[2] f(x)=Θ(g(n)) means f (the running time of the algorithm) grows exactly like g when n (input size) gets larger. In other words, the growth rate of f(x) is asymptotically proportional to g(n).
[3] Same thing. Here the growth rate is no faster than g(n). Big-oh is the most useful because it represents the worst-case behavior.
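Formally, the three main bounds above can be restated as:

\[ f(n) = O(g(n)) \iff \exists\, c > 0,\ n_0 : 0 \le f(n) \le c\,g(n) \ \text{for all } n \ge n_0 \]
\[ f(n) = \Omega(g(n)) \iff \exists\, c > 0,\ n_0 : f(n) \ge c\,g(n) \ \text{for all } n \ge n_0 \]
\[ f(n) = \Theta(g(n)) \iff f(n) = O(g(n)) \ \text{and} \ f(n) = \Omega(g(n)) \]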
In short, if algorithm is __ then its performance is __
algorithm performance
o(n) < n
O(n) ≤ n
Θ(n) = n
Ω(n) ≥ n
ω(n) > n

Big-O Complexity Chart

This interactive chart, created by our friends over at MeteorCharts, shows the number of operations (y-axis) required to obtain a result as the number of elements (x-axis) increases.  O(n!) is the worst complexity, requiring 720 operations for just 6 elements, while O(1) is the best complexity, requiring only a constant number of operations for any number of elements.

Thursday, October 23, 2014

Cisco Smart Licensing

COMPASS - SDN


Compass first meetup from Shuo Yang


I was recently at a meetup, and here are some of the leads from it...

Although the demo at the meetup did not work and the preparation was not enough, the presenters seemed unprepared and instead played a YouTube recording of a COMPASS installation and deployment.
I am still interested in following this project.

http://www.slideshare.net/seanatpurdue/compass-first-meetup?ref=http://www.meetup.com/openstack/events/209517432/?a=md1_grp&rv=md1&_af_eid=209517432&_af=event 

JAX-RS

https://jax-rs-spec.java.net/
http://docs.oracle.com/javaee/6/tutorial/doc/giepu.html

OpenDaylight Tutorial

OpenDaylight Tutorial

Tuesday, September 16, 2014

V8 Java Script to machine language


Node.js is a JavaScript framework that runs on the V8 engine.
V8 compiles JavaScript directly into machine code for the target processor type, so a mapping from JavaScript code to machine code is performed at runtime.



Also, Node.js is an asynchronous, event-driven framework designed to build scalable network applications. In the "hello world" HTTP server example, many connections can be handled concurrently: upon each connection the callback is fired, but if there is no work to be done, Node sleeps.



 











http://www.youtube.com/watch?v=hWhMKalEicY





Friday, July 25, 2014

ovs-ofctl commands on OpenFlow 1.3 Mininet switch (ovsk)

ovs-ofctl commands on OpenFlow 1.3 Mininet switch (ovsk)

The ovs-ofctl program is a command-line tool for monitoring and administering OpenFlow switches. It can also show the current state of an OpenFlow switch, including features, configuration, and table entries. It should work with any OpenFlow switch, not just Open vSwitch.

Before pushing the flows we need to start the Mininet switch using the command below.
sudo mn --topo single,2 --controller remote,ip=192.168.56.103:6653 --switch ovsk,protocols=OpenFlow13
where,
192.168.56.103 is the openflowplugin controller's IP address, and protocols=OpenFlow13 states that we want to use OpenFlow protocol version 1.3; tcp/6653 is used here for OF1.3 communication and 6633 for OF1.0.
Note that Mininet and the controller are running on different virtual machines.


If the above command executes successfully, we should see OF1.3 communication between OVSK (switch s1 here) and the SDN controller.
Flows can be added as
sudo ovs-ofctl -O OpenFlow13 add-flow s1 in_port=1,actions=mod_nw_ttl:2,output:2

sudo ovs-ofctl -O OpenFlow13 add-flow s1 priority=11,dl_type=0x0800,nw_src=10.0.0.1,actions=mod_tp_dst:8888

If the above commands are configured successfully on OVSK, we should be able to dump the flows.
mininet@mininet-vm:~$ sudo ovs-ofctl -O OpenFlow13 dump-flows s1
OFPST_FLOW reply (OF1.3) (xid=0x2):
 cookie=0x0, duration=7.443s, table=0, n_packets=0, n_bytes=0, priority=11,ip,nw_src=10.0.0.1 actions=mod_tp_dst:8888


ovs-ofctl can connect to an OpenFlow switch over ssl, tcp (IP and port), or a local unix socket file. ovs-ofctl talks to ovs-vswitchd, while ovs-vsctl talks to ovsdb-server.
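A couple of additional read-only queries against the same switch can help verify the setup (a minimal sketch; s1 is the Mininet bridge created above):

sudo ovs-ofctl -O OpenFlow13 show s1        # negotiated features, ports and tables
sudo ovs-ofctl -O OpenFlow13 dump-ports s1  # per-port rx/tx statistics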

Detailed options can be found at
http://openvswitch.org/cgi-bin/ovsman.cgi?page=utilities%2Fovs-ofctl.8

Big Switch SDN Fabric




http://bigswitch.com/blog/2014/07/22/announcing-big-cloud-fabric-the-first-data-center-bare-metal-sdn-fabric


http://www.amazon.com/The-Big-Switch-Rewiring-Edison/dp/039334522X/ref=cm_cr_pr_pb_t









Next Generation Monitoring Fabric diagram

Wednesday, July 23, 2014

Tcpdump usage examples

Tcpdump usage examples

March 13, 2010
In most cases you will need root permission to be able to capture packets on an interface. Using tcpdump (with root) to capture the packets and saving them to a file to analyze with Wireshark (using a regular account) is recommended over using Wireshark with a root account to capture packets on an "untrusted" interface. See the Wireshark security advisories for reasons why.
See the list of interfaces on which tcpdump can listen:
tcpdump -D
Listen on interface eth0:
tcpdump -i eth0
Listen on any available interface (cannot be done in promiscuous mode. Requires Linux kernel 2.2 or greater):
tcpdump -i any
Be verbose while capturing packets:
tcpdump -v
Be more verbose while capturing packets:
tcpdump -vv
Be very verbose while capturing packets:
tcpdump -vvv
Be less verbose (than the default) while capturing packets:
tcpdump -q
Limit the capture to 100 packets:
tcpdump -c 100
Record the packet capture to a file called capture.cap:
tcpdump -w capture.cap
Record the packet capture to a file called capture.cap but display on-screen how many packets have been captured in real-time:
tcpdump -v -w capture.cap
Display the packets of a file called capture.cap:
tcpdump -r capture.cap
Display the packets using maximum detail of a file called capture.cap:
tcpdump -vvv -r capture.cap
Display IP addresses and port numbers instead of domain and service names when capturing packets:
tcpdump -n
Capture any packets where the destination host is 192.168.1.1. Display IP addresses and port numbers:
tcpdump -n dst host 192.168.1.1
Capture any packets where the source host is 192.168.1.1. Display IP addresses and port numbers:
tcpdump -n src host 192.168.1.1
Capture any packets where the source or destination host is 192.168.1.1. Display IP addresses and port numbers:
tcpdump -n host 192.168.1.1
Capture any packets where the destination network is 192.168.1.0/24. Display IP addresses and port numbers:
tcpdump -n dst net 192.168.1.0/24
Capture any packets where the source network is 192.168.1.0/24. Display IP addresses and port numbers:
tcpdump -n src net 192.168.1.0/24
Capture any packets where the source or destination network is 192.168.1.0/24. Display IP addresses and port numbers:
tcpdump -n net 192.168.1.0/24
Capture any packets where the destination port is 23. Display IP addresses and port numbers:
tcpdump -n dst port 23
Capture any packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers:
tcpdump -n dst portrange 1-1023
Capture only TCP packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers:
tcpdump -n tcp dst portrange 1-1023
Capture only UDP packets where the destination port is between 1 and 1023 inclusive. Display IP addresses and port numbers:
tcpdump -n udp dst portrange 1-1023
Capture any packets with destination IP 192.168.1.1 and destination port 23. Display IP addresses and port numbers:
tcpdump -n "dst host 192.168.1.1 and dst port 23"
Capture any packets with destination IP 192.168.1.1 and destination port 80 or 443. Display IP addresses and port numbers:
tcpdump -n "dst host 192.168.1.1 and (dst port 80 or dst port 443)"
Capture any ICMP packets:
tcpdump -v icmp
Capture any ARP packets:
tcpdump -v arp
Capture either ICMP or ARP packets:
tcpdump -v "icmp or arp"
Capture any packets that are broadcast or multicast:
tcpdump -n "broadcast or multicast"
Capture 500 bytes of data for each packet rather than the default of 68 bytes:
tcpdump -s 500
Capture all bytes of data within the packet:
tcpdump -s 0

RYU SDN Framework Installation



Installing RYU

 % pip install ryu
or
% git clone git://github.com/osrg/ryu.git
% cd ryu; python ./setup.py install 
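Once installed, a quick way to exercise the setup is to launch one of the sample applications bundled with RYU, for example the OpenFlow 1.3 learning switch (a minimal sketch; by default ryu-manager listens for switch connections on tcp/6633):

% ryu-manager --verbose ryu.app.simple_switch_13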

Issues faced
#sudo ryu-manager --verbose --observe-links ryu.app.ws_topology
Traceback (most recent call last):
  File "/usr/local/bin/ryu-manager", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2707, in <module>
    working_set.require(__requires__)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 686, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: six>=1.4.0

#ryu -version
Traceback (most recent call last):
  File "/usr/local/bin/ryu", line 5, in <module>
    from pkg_resources import load_entry_point
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 2707, in <module>
    working_set.require(__requires__)
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 686, in require
    needed = self.resolve(parse_requirements(requirements))
  File "/usr/lib/python2.7/dist-packages/pkg_resources.py", line 584, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: six>=1.4.0


Solution
#sudo easy_install Distribute
Searching for Distribute
Best match: distribute 0.6.24dev-r0
Adding distribute 0.6.24dev-r0 to easy-install.pth file
Installing easy_install script to /usr/local/bin
Installing easy_install-2.7 script to /usr/local/bin
Installing easy_install-2.6 script to /usr/local/bin

Using /usr/lib/python2.7/dist-packages
Processing dependencies for Distribute
Finished processing dependencies for Distribute


sudo easy_install -U Distribute
Searching for Distribute
Reading http://pypi.python.org/simple/Distribute/
Best match: distribute 0.7.3
Downloading https://pypi.python.org/packages/source/d/distribute/distribute-0.7.3.zip#md5=c6c59594a7b180af57af8a0cc0cf5b4a
Processing distribute-0.7.3.zip
Running distribute-0.7.3/setup.py -q bdist_egg --dist-dir /tmp/easy_install-IggX2I/distribute-0.7.3/egg-dist-tmp-Q36wpI
warning: install_lib: 'build/lib.linux-x86_64-2.7' does not exist -- no Python modules to install

Adding distribute 0.7.3 to easy-install.pth file

Installed /usr/local/lib/python2.7/dist-packages/distribute-0.7.3-py2.7.egg
Processing dependencies for Distribute
Searching for setuptools>=0.7
Reading http://pypi.python.org/simple/setuptools/
Reading http://peak.telecommunity.com/snapshots/
Reading https://bitbucket.org/pypa/setuptools
Reading https://pypi.python.org/pypi/setuptools
Best match: setuptools 5.4.1
Downloading https://pypi.python.org/packages/source/s/setuptools/setuptools-5.4.1.zip#md5=96bd961ab481c78825a5be8546f42a66
Processing setuptools-5.4.1.zip
Running setuptools-5.4.1/setup.py -q bdist_egg --dist-dir /tmp/easy_install-GGJ3Yb/setuptools-5.4.1/egg-dist-tmp-AxT2X2
Adding setuptools 5.4.1 to easy-install.pth file
Installing easy_install script to /usr/local/bin
Installing easy_install-2.7 script to /usr/local/bin

Installed /usr/local/lib/python2.7/dist-packages/setuptools-5.4.1-py2.7.egg
Finished processing dependencies for Distribute


 Solved

#ryu
usage: ryu [-h] [--config-dir DIR] [--config-file PATH] [--version]
           [subcommand] ...

positional arguments:
  subcommand          [rpc-cli|run|of-config-cli]
  subcommand_args     subcommand specific arguments

optional arguments:
  -h, --help          show this help message and exit
  --config-dir DIR    Path to a config directory to pull *.conf files from.
                      This file set is sorted, so as to provide a predictable
                      parse order if individual options are over-ridden. The
                      set is parsed after the file(s) specified via previous
                      --config-file, arguments hence over-ridden options in
                      the directory take precedence.
  --config-file PATH  Path to a config file to use. Multiple config files can
                      be specified, with values in later files taking
                      precedence. The default files used are: None.
  --version           show program's version number and exit
