Wednesday, May 6, 2015

Panamax


  • Panamax is an open source project from CenturyLink Labs.
  • It provides a user interface to manage Docker containers on a host.
  • As I tried it, it comes as a Vagrant box image with CoreOS + Panamax installed on it; a few ports are opened so the VM can be reached from the host (see the sketch after this list).
  • The good:
    • Panamax has a user interface that lets you see the list of containers co-hosted in this VM.
    • You can see container stats, hooked up via google/cadvisor.
    • You can search the docker.io repo for new images - BUT when I tried to Add Image, it did not work.
  • The bad:
    • There are too many small GitHub projects under centurylinklabs, and I am not sure what direction the team is taking; it looks too amateur to me.
    • The code base is too buggy.
    • Open issues are not answered.
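
Since it ships as a Vagrant box, bringing Panamax up follows the general Vagrant workflow. The sketch below is a hedged outline of that generic flow, not the exact Panamax installer steps; the box name and URL are illustrative placeholders.

# add the CoreOS + Panamax box image (name and URL are placeholders)
vagrant box add panamax-coreos <box-url>

# bring up the VM; forwarded ports expose the Panamax UI to the host
vagrant up

# the UI is then reachable from a host browser on a forwarded port,
# e.g. http://localhost:<forwarded-port>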

Docker aka Container management

I have recently been playing more with Docker and containers, as my job demands it.

Jotting down some useful details from my learning here, for the general public and for my own records as well.

What is Docker?
A startup company in San Francisco, California created a simple Linux tool to manage Linux containers.

Why does one need Docker?
Just as VMware's technology is the answer for traditional virtualization, Docker is the answer for next-generation lightweight virtualization. Docker is made to ease things here.

What is a container?
Containers have existed for a long time in the Linux world; a container is an instance of another Linux environment running inside a Linux operating system.

Why do we need an operating system inside another operating system?
Virtualization is all about making efficient use of compute, storage, and networking. As new software applications and upgrades are rolled out every second in data centers for modern needs (social networking, chat, video, etc.), a big chunk of compute power is needed. For ease of management of software applications in data centers, the smallest possible workload bundle is needed. For that, a host (Linux OS) should be capable of running multiple smaller guest Linux environments. This sounds like the virtual machine concept, but it is not: containers are extremely lightweight compared to virtual machines.
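
To get a feel for how lightweight containers are, here is a minimal sketch using the standard Docker CLI (the image name is just a common public example):

# pull a small base image from the public registry
sudo docker pull ubuntu

# start a container and get a shell inside it; it comes up in about a second,
# since no guest kernel has to boot
sudo docker run -it ubuntu /bin/bash

# back on the host, list the running containers
sudo docker ps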

Tuesday, January 6, 2015

Making sense of SDN

I came across this three-part video series on YouTube, recorded back in August 2012; I found it interesting and useful.


OpenSSL User Interface

An excellent working UI tool for OpenSSL that I came across: http://sourceforge.net/projects/opensslui/

  • It has an installable Windows UI.
  • It uses the equivalent OpenSSL commands to create and manage CA certs and keys for PKI (see the sketch after this list).
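
For reference, the underlying OpenSSL commands such a UI wraps look roughly like the following. This is a minimal sketch of creating a CA and issuing a server certificate; the file names and subject strings are illustrative.

# generate a 2048-bit RSA key for the CA
openssl genrsa -out ca.key 2048

# create a self-signed CA certificate, valid for one year
openssl req -new -x509 -key ca.key -out ca.crt -days 365 -subj "/CN=Test CA"

# generate a server key and a certificate signing request (CSR)
openssl genrsa -out server.key 2048
openssl req -new -key server.key -out server.csr -subj "/CN=server.example.com"

# sign the CSR with the CA to issue the server certificate
openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out server.crt -days 365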




Network Policy Management

Far too many times we have come across the complexity of setting up policies/rules on network equipment, each vendor following its own proprietary approach.

In the end, there is no one-stop shop (a single CLI or single pane of glass) to address this classic multi-vendor problem.

As a simple rule of thumb, a network service policy, written out in words, looks like this:
"input trunk + forwarding policy = output trunk"

A standard for centralized control-traffic management is the OpenFlow protocol, which allows policies to be managed centrally and applied to distinct network equipment.

An orchestrator project like OpenStack drives this through initiatives like Congress: https://wiki.openstack.org/wiki/Congress

Policy Language

The policy language for Congress is Datalog, which is basically SQL but with a syntax that is closer to traditional programming languages. This declarative language was chosen because its semantics are well known to a broad range of DevOps engineers, yet its syntax is more terse, making it better suited for expressing real-world policies. The grammar is given below.
<policy> ::= <rule>*
<rule> ::= <atom> COLONMINUS <literal> (COMMA <literal>)*
<literal> ::= <atom>
<literal> ::= NOT <atom>
<atom> ::= TABLENAME LPAREN <term> (COMMA <term>)* RPAREN
<term> ::= INTEGER | FLOAT | STRING | VARIABLE  
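
As an illustration (my own example following the grammar above, not one from the Congress docs), a rule that flags any VM connected to a network not marked as secure could be written as:

error(vm) :- virtual_machine(vm), port(vm, p), connected(p, net), not secure(net)

Here vm, p, and net are variables, while error, virtual_machine, port, connected, and secure are table names assumed to be populated from the cloud services. Read like SQL: select every VM that has a port connected to a network that is not secure, and place it in the error table.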

Tuesday, December 23, 2014

GRE Tunnel in OVS

Quick command reference for setting up a GRE tunnel with OVS:


# Add a bridge br0
sudo ovs-vsctl add-br br0

# Add a temporary internal interface to connect to the guest VM
sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal

# Set a new subnet IP so the guest can reach the other end of the GRE tunnel
sudo ifconfig tep0 192.168.200.20 netmask 255.255.255.0

# Create a GRE (Generic Routing Encapsulation) tunnel interface on br0
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=<GRE tunnel endpoint on other hypervisor>

# Verify the configuration
sudo ovs-vsctl show
3fb59768-6943-496f-8963-2cd2e2935075
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "tep0"
            Interface "tep0"
                type: internal
    ovs_version: "2.0.2"
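
To complete the tunnel, the mirror-image setup is needed on the other hypervisor: its own tep0 address on the same 192.168.200.0/24 subnet, and remote_ip pointing back at this host. A sketch, with an illustrative address:

# on the second hypervisor
sudo ovs-vsctl add-br br0
sudo ovs-vsctl add-port br0 tep0 -- set interface tep0 type=internal
sudo ifconfig tep0 192.168.200.21 netmask 255.255.255.0
sudo ovs-vsctl add-port br0 gre0 -- set interface gre0 type=gre options:remote_ip=<GRE tunnel endpoint on first hypervisor>

Once both ends are up, a ping from 192.168.200.20 to 192.168.200.21 should flow through the GRE tunnel.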

Monday, December 22, 2014

2015 Server Giants and NGN (Next Gen.Networks) - CIMI Blog

Posted: 22 Dec 2014 05:23 AM PST
Next year is going to be pivotal in telecom, because it’s almost surely going to set the stage for the first real evolution of network infrastructure we’ve had since IP convergence twenty years ago.  We’re moving to the “next-generation network” everyone has talked about, but with two differences.  First, this is for real, not just media hype.  Second, this “network” will be notable not for new network technology but for the introduction of non-network technology—software and servers—into networking missions.
Today we begin our review of the network players of 2015 and beyond, focusing on how these companies are likely to fare in the transition to what’s popularly called “NGN”.  As I said in my blog of December 19th, I’m going to begin with the players with the most to gain, the group from which the new powerhouse will emerge if NGN evolution does happen.  That group is the server vendors, and it includes (in alphabetical order) Cisco, Dell, HP, IBM, and Oracle.
The big advantage this group has is that they can expect to make money from any network architecture that relies on hosted functionality.  While it’s often overlooked as a factor in determining market leadership during periods of change, one of the greatest assets a vendor can have is a justification to endure through a long sales cycle.  Salespeople don’t work for free, and companies can’t focus their sales effort on things that aren’t going to add to their profits.  When you have a complicated transformation to drive, you have to be able to make a buck out of the effort.
The challenge that SDN and NFV have posed for the server giants is that the servers that are the financial heart of the SDN/NFV future are part of the plumbing.  “NFVI” or NFV Infrastructure is just what you run management, orchestration, and virtual network functions on.  It’s MANO and VNFs that build the operators’ business case.  So do these server players try to push their own MANO/VNF solutions and risk limiting their participation in the server-hosted future to those wins they get?  Do they sit back to try to maximize their NFVI opportunity and risk not being a part of any of the early deals because they can’t drive the business case?
The vendor who’s taken the “push-for-the-business-case” route most emphatically is HP, whose OpenNFV architecture is the most functionally complete and now is largely delivered as promised.  A good part of HP’s aggression here may be due to the fact that they’re the only player whose NFV efforts are led by a cloud executive, effectively bringing the two initiatives together.  HP also has a partner ecosystem that’s actually enthusiastic and dedicated, not just hanging around to get some good ink.  HP is absolutely a player who could build a business case for NFV, and their OpenDaylight and OpenStack support means they could extend the NGN umbrella over all three of our revolutions—the cloud, SDN, and NFV.  They are also virtually unique in the industry in offering support for legacy infrastructure in their MANO (Director) product.
Their biggest risk is their biggest strength—the scope of what they can do.  You need to have impeccable positioning and extraordinary collateral to make something like NFV, SDN, or cloud infrastructure a practical sell.  Otherwise you ask your sales force to drag people from disinterested spectators to committed customers on their own, which doesn’t happen.  NGN is the classic elephant behind a screen, but it’s a really big elephant with an unprecedentedly complicated anatomy to grope.  Given that they have all the cards to drive the market right now, their biggest risk is delay that gives others a chance to catch up.  Confusion in the buyer space could do that, so HP is committed (whether they know it or not) to being the go-to people on NGN, in order to win it.
The vendors who seem to represent the “sit-back” view?  Everyone else, at this point, but for different reasons.
Cisco’s challenge is that all of the new network technologies are looking like less-than-zero-sum games in a capital spending sense.  As the market leader in IP and Ethernet technologies, Cisco is likely to lose at least as much in network equipment as it could hope to gain in servers.  Certainly they’d need a superb strategy to realize opex efficiency and service agility to moderate their risks, and Cisco has never been a strategic leader—they like “fast-followership” as an approach.
Dell seems to have made an affirmative choice to be an NFVI leader, hoping to be the fair arms merchant and not a combatant in making the business case for NGN.  This may sound like a wimpy choice, but as I’ve noted many times NGN transformation is very complicated.  Dell may reason that a non-network vendor has little chance of driving this evolution, and that if they fielded their own solutions they’d be on the outs with all the network vendors who push evolution along.  Their risks are obvious—they miss the early market and miss chances to differentiate themselves on features.
IBM’s position in NFV is the most ambiguous of any of the giants.  They are clearly expanding their cloud focus, but they sold off their x86 server business to Lenovo and now have far less to gain from the NGN transformation than any of the others in this category.  Their cloud orchestration tools are a strong starting point for a good NFV MANO solution, but they don’t seem interested in promoting the connection.  It’s hard to see why they’d hang back this long and suddenly get religion, and so their position may well stay ambiguous in 2015.
Oracle has, like HP, announced a full-suite NFV strategy, but they’ve yet to deliver on the major MANO element and their commitment doesn’t seem as fervent to me.  Recall that Oracle was criticized for pooh-poohing the cloud, then jumping in when it was clear that there was opportunity to be had.  I think they’re likely doing that with SDN, NFV, and NGN.  What I think makes this strategy a bit less sensible is that Oracle’s server business could benefit hugely from dominance in NFV.  In fact, carrier cloud and NFV could single-handedly propel Oracle into second place in the market (they recently slipped behind Cisco).  It’s not clear whether Oracle is still waiting for the sign of NFV success, or will jump off their new positioning to make a go at market leadership.
I’m not a fan of the “wait-and-hope” school of marketing, I confess.  That makes me perhaps a secret supporter of action which makes me sympathetic to HP’s approach more than those of the others in this group.  Objectively, I can’t see how anyone can hope to succeed in an equipment market whose explicit goal is to support commodity devices, except on price and with much pain.  If you don’t want stinking margins you want feature differentiation, and features are attributes of higher layers of SDN and NFV and the cloud.  If those differentiating features are out there, only aggression is going to get to them.  If they’re available in 2015 then only naked aggression will do.  So while I think HP is the lead player now even they’ll have to be on top of their game to get the most from NGN evolution.

Wednesday, December 17, 2014

Search - Splunk, Elasticsearch; Log Management - Sumologic, Logstash

Elasticsearch says it's on a mission to make massive amounts of data available to businesses. Toward that end, in February the company debuted release 1.0 of its Elasticsearch search and analytics software for working with big data. The software is part of the company's open-source ELK stack that includes Elasticsearch, Logstash (log management) and Kibana (log visualization) software.
Elasticsearch is competing with Solr, the open-source search technology from the Apache Software Foundation. Many reviewers, users and solution providers give both technologies high marks. So it will be interesting to see if both gain traction, or if one or the other wins out.
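
To make the search-and-analytics claim concrete, here is a minimal sketch against Elasticsearch's REST API (the index, type, and document are illustrative):

# index a sample log event; Elasticsearch creates the index on the fly
curl -XPUT 'http://localhost:9200/logs/event/1' -d '{"message": "disk full on /dev/sda1", "level": "error"}'

# search for it with a simple query-string query
curl -XGET 'http://localhost:9200/logs/_search?q=level:error'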


  • Logstash is an agent to collect and mine logs: https://github.com/logstash-plugins (see the quick sketch after this list)
  • Also see Sumo Logic, which claims to offer its log management agent on AWS.
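
For a quick feel of the Logstash pipeline, it can be driven from the shell with an inline config before writing a full one. A minimal sketch; the file path is illustrative, and the elasticsearch output options vary across Logstash versions:

# read events from stdin and print them back as structured events
bin/logstash -e 'input { stdin { } } output { stdout { } }'

# a real pipeline would tail log files and ship them to Elasticsearch, e.g.:
# input  { file { path => "/var/log/syslog" } }
# output { elasticsearch { host => "localhost" } }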



Risk Focus is an expert implementation partner of Splunk in the financial services arena and a big proponent of the value Splunk delivers.  We have implemented Splunk in numerous banks, funds and clearing/exchange firms across multiple mission-critical use cases.
Yet Splunk is not the only log analyzer out there.  In particular, ELK (elasticsearch + logstash + kibana) is a growing open source offering which purports to have the same functionality as Splunk at virtually no cost.
Is this a valid claim? Do Splunk and ELK provide similar functionality, stability and technical richness needed by corporate institutions like banks, asset managers, hedge funds, exchanges, industry utilities and major technology providers?

Cost of Splunk vs. ELK

Let’s start by taking a look at the overall cost of investing in Splunk versus ELK.  Corporate buyers invariably look at total cost of ownership (TCO) when making their buying decision because they want to know the true “all in cost” not just the up front fee.
“Splunk is expensive and ELK is free.”
A web search will turn up hundreds of blog entries claiming Splunk’s pay-per-gigabyte-indexed pricing model is expensive. Splunk data indexing charges sound pricey, but the way the pricing actually works is far cheaper than it first appears. Yes, an ELK license is free but Splunk is amazingly cheap, too.
Underlying the “Splunk is expensive” claim is the assumption that all data will be indexed, which is rarely true.  A proper implementation includes an up-front analysis such that only the valuable subset of a company’s data is indexed.
Midsize and larger companies tend to purchase software and data licenses at bulk discounted rates.  This gives a discount off list price and provides predictability (no “bait and switch” surprises) after adoption. For less than the cost of a single skilled FTE in a G10 country you can index a huge amount of log data with Splunk across your IT infrastructure and earn tremendous efficiency cost savings. We’ve seen Splunk’s rates dropping over time, so it’s getting even cheaper. If you just need to dabble, do basic development and testing, or a proof of concept, Splunk offers a free Enterprise license up to 500MB per day.
The primary concern for sophisticated corporate buyers is cost-to-value  or total cost of ownership (TCO).  Data license costs (at least at the pricing level of Splunk or other log analyzers) tend to be the least important factor in the TCO equation.  The value of cost savings and new revenue discoveries can provide immense financial value that dwarf the license costs.
“Once we get ELK going it will be cheaper. The time to learn and get ELK figured out is an investment.”
Both Splunk and ELK can be installed and running quickly at a basic level with minimal learning time. Efficiency gains from reducing “Team GREP” activities (which never really identify or resolve underlying issues well), the actionable intelligence obtained (allowing you to optimize IT spend very quickly), and operational risk reductions all argue in favor of installing a log analyzer as soon as possible.
Once installed, the question then becomes: can Splunk or ELK knowledge be rolled out through an organization quickly?  Rapidly enabling multiple users to take advantage of indexing, correlation and dashboarding capabilities is necessary to generate business value from a log analyzer.
Splunk is far ahead of ELK in speed of roll out and depth of coverage.  Splunk offers a rich education program, a Professional Services group and an expansive network of skilled consulting partners.  Getting a team Splunk certified takes less than 1 month.  Hiring a Splunk partner firm to roll out capabilities quickly and build advanced correlation apps can further shortcut the time-to-value.  Splunk has a large App Store with hundreds of free and paid apps to connect to standard IT hardware and software platforms. ELK is growing rapidly and is making similar education efforts, but is years behind Splunk in these critical areas.
“Beware the hidden costs.”
Compared to Splunk, we believe the investment required to institutionalize ELK is far more time-consuming and costly in terms of lost efficiencies and investment dollars than buying Splunk.  An ELK user is likely to find that they must create an entire infrastructure around knowledge transfer, skill-building and connectors to underlying log sources before rolling ELK out across the firm.  These are hidden but serious costs of choosing ELK.  The incremental cost of a Splunk data license is much lower than the time and costs of building an in-house knowledge and support structure from scratch around ELK.
For a small organization with basic needs and a small IT support group, ELK is certainly a good investment.  It’s free from a license standpoint and rewards DIY approaches.  If you’re an IT development shop ELK is an excellent choice.
For large and sophisticated firms, especially financial institutions, energy companies, defense firms and the like, Splunk is cheaper in the long run.


Splunk Support vs. ELK Support

Part of the total value comparison between Splunk and ELK is support, both from the vendor and the surrounding community.
“Wait, ELK support isn’t free?”
If you’re going to implement and run mission-critical IT monitoring tools, then proper support level agreements (SLAs) and enterprise-level engineering processes are mandatory.
Splunk is a fully integrated indexing and analytics package with enterprise-level support from both Splunk, Inc. and the huge Splunk developer community. Buying a Splunk license provides these critical support items. Splunk has supported thousands of installations worldwide.

ELK now offers paid support, SLAs, etc. These services are not free and essentially push ELK into the “freemium” model.  Not a bad move on ELK’s part, just unproven.  ELK paid support doesn’t yet have experience supporting hundreds of large corporations.
“Is our IP protected?”
Splunk has operated a secure support infrastructure for years.
The ELK open source community is very active with support but there are no data confidentiality, security or IP protections when sharing an issue with the ELK community. This lack of IP protection does not pass stringent financial, healthcare or defense industry requirements.
“What if something goes really wrong?”
ELK = elasticsearch + logstash + kibana. These are three different businesses (aka “projects”) which have a symbiotic relationship.  Relying on three different open source businesses for one solution carries significant operational and legal risk.
Splunk, Inc. is a major publicly listed company with deep pockets and well-defined operations. They are a viable and stable corporate partner presenting little risk.
“Can I trust the vendor’s partners to do solid work?”
Splunk has a rich network of preferred domain experts like Risk Focus that must go through intensive training and certification. The installation and app development work Splunk partners do is generally reliable and meets mission-critical engineering standards.
ELK consultancies are emerging, but the community does not yet have a disciplined and well-trained network of implementation partners held firmly to common standards.  ELK has no leverage over independent ELK implementation consultants to follow their standards.  Buyer beware.
We are open source proponents, but these issues merit serious consideration when dealing with complex regulated institutions like banks, brokers, asset managers and exchanges.



Splunk Apps vs. ELK Plugins

One of the major areas of value when adopting an enterprise-wide technology is the ability to easily integrate it with your existing IT infrastructure and applications.
Integration is normally achieved through pre-built adapters, ETL components, apps, plugins and the like.   The more pre-built adapters available, the faster a company can roll out the technology and obtain value without spending significant sums on development, testing and integration.

The Splunk App Store is a rich source of free and paid apps which you can use off the shelf or customize.  Splunk’s app selection covers most generic enterprise software (Outlook, SQL Server, Oracle, etc.), messaging solutions (TIBCO), databases, servers, firewalls, security applications, gateways, etc.
ELK has far fewer pre-built apps (plugins).  Elasticsearch has a number of them.  Most are scattered around different vendors.  There is no comprehensive certification process for ELK plugins.  ELK is in major catch-up mode here and adopting ELK means lots of DIY development to extend it across the enterprise.
To our knowledge there are no financial services app specialists for ELK and very few, if any, in most other industry verticals.  The ELK community is growing fast so the landscape of specialists will change, but for now Splunk has substantially more penetration across industry verticals.
