Friday, March 3, 2017

K8s (Kube) Package Manager - Helm

I see Helm as analogous to AWS CloudFormation.

https://github.com/kubernetes/helm

Helm is the package manager for Kubernetes. It in turn uses Kubernetes charts.

Helm is a tool for managing Kubernetes charts. Charts are packages of pre-configured Kubernetes resources.
Use Helm to...


  • Find and use popular software packaged as Kubernetes charts
  • Share your own applications as Kubernetes charts
  • Create reproducible builds of your Kubernetes applications
  • Intelligently manage your Kubernetes manifest files
  • Manage releases of Helm packages


Helm is the tool that loads charts into Kubernetes for application deployment.
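
As a sketch of what a chart contains (the chart name below is hypothetical, not from any real chart): a chart is a directory holding metadata, default values, and templated manifests. A minimal Chart.yaml might look like:

```yaml
# Chart.yaml - chart metadata (hypothetical chart named "myapp")
name: myapp
version: 0.1.0
description: A minimal sketch of a Helm chart definition
```

The same directory would also hold a values.yaml with default configuration and a templates/ folder with the Kubernetes manifests; `helm install` then renders the templates and loads the resulting resources into the cluster.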

Thursday, March 2, 2017

WTF is "Open Source Core"

I recently came across this term, and it sounds cool and practical to me.
I thought I'd share it here, as I suddenly realized it is all over my screen...

Example: Look at https://www.influxdata.com/products/ , formerly InfluxDB.

They have really good products (InfluxCloud, InfluxEnterprise, the TICK stack); in turn, all of these are just one "Real-Time Monitoring Solution".

In a data center we host tons of applications, and we need a true solution for monitoring. There are tons of monitoring solutions (Datadog, New Relic, etc.), but all are closed products.

The power of "Open Source Core" is that the whole complex product is open sourced (on GitHub), while the niche features needed to take it to the enterprise, or to integrate it, are the way to monetize.

That is, in this example, InfluxEnterprise is the monetizing product.

Another example : Hashicorp
https://www.hashicorp.com/

Famously called the "Hashi-stack", they open sourced all their complex core products:
Vagrant, Packer, Vault, Consul, Terraform. But their enterprise versions and another niche product, Atlas, are their monetizing products.

https://overcast.fm/+HZUc5QQ4k/35:07

Mesosphere DC/OS installation

I am playing around with DC/OS and tried to install it on GCE (Google Compute Engine).

I blindly followed https://dcos.io/docs/1.8/administration/installing/cloud/gce/#configure

and saw stars at the end (well, spinning around my head).

Not-so-good documentation; end result: "nothing works".

I was hoping I would improve this documentation with my experience of installing on GCE, but I failed miserably and could not achieve what I wanted to do.

OK, what really happened:

I followed the instructions in the DC/OS documentation but could not get DC/OS up and running.

1. https://dcos.io/docs/1.8/administration/installing/cloud/gce/#bootstrap :
Easy, and I got this working.
2. https://dcos.io/docs/1.8/administration/installing/cloud/gce/#install :
These are Ansible powered and call the GCE API to provision a master0 node and then agent nodes. All works fine with a slight tweak to the Ansible playbooks; it did not run the first time as documented.
3. https://dcos.io/docs/1.8/administration/installing/cloud/gce/#configure : Failed

I wanted to be sure whether I had really installed the DC/OS stack and whether I was successful.
I could not see Mesos, Marathon, or the DC/OS UI. Err: FAILED.

All I have done so far is bring up 3 VMs in GCE.

I wish this documentation gets better, with cleaner support...


BTW: a good podcast to listen to: https://overcast.fm/+I_rT72mY/41:07

Wednesday, March 1, 2017

How to run a Graylog cluster in GKE (Google Container Engine)

With reference to another article I wrote earlier on how to run a Graylog cluster in Kubernetes here ( http://beeyeas.blogspot.com/2017/02/how-to-run-graylog-cluster-in-kubernetes.html ), that was the way to run Graylog (Mongo, Elasticsearch, and the Graylog server) together in one single container instance in Minikube.

Here I take another attempt, running microservice containers of MongoDB, Elasticsearch, and Graylog2 on Google Cloud Platform - Google Container Engine (GKE).

Step 1: See here ( http://beeyeas.blogspot.com/2017/02/gke-google-container-engine.html ) for how to bring up GKE (a Kubernetes instance).

I assume you followed the instructions there correctly and have the "kubectl" CLI configured to point to your gcloud GKE instance.
Make sure you can access the Kubernetes dashboard, for which you have to proxy the service:

#kubectl proxy

Step 2: Get the Kubernetes service and deployment files here ( https://github.com/beeyeas/graylog-kube ); clone the repo.

Step 3: Create the mongo, elasticsearch, and graylog resources:
#kubectl create -f mongo-service.yaml,mongo-deployment.yaml
#kubectl create -f elasticsearch-deployment.yaml,elasticsearch-service.yaml
#kubectl create -f graylog-deployment.yaml,graylog-service.yaml
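
The actual manifests live in the repo above; purely as an illustration of their shape (the label names here are assumed, not taken from the repo), a Service exposing the Graylog web UI and GELF input ports could look roughly like this:

```yaml
# graylog-service.yaml (illustrative sketch - the real file is in the repo)
apiVersion: v1
kind: Service
metadata:
  name: graylog
spec:
  selector:
    app: graylog        # assumes the deployment labels its pods app=graylog
  ports:
    - name: web
      port: 9000        # Graylog web UI
    - name: gelf
      port: 12201       # GELF input
```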

Step 4: Forward the Graylog UI to local port 9000 (substitute your own Graylog pod name):
#kubectl port-forward graylog-2041601814-5qnbc 9000:9000

Step 5: Verify that all services are up:
#kubectl get services
NAME            CLUSTER-IP    EXTERNAL-IP   PORT(S)              AGE
elasticsearch   None          <none>        55555/TCP            1h
graylog         10.3.248.24   <none>        9000/TCP,12201/TCP   1h
kubernetes      10.3.240.1    <none>        443/TCP              1d
mongo           None          <none>        55555/TCP            1h
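
Note the CLUSTER-IP of None for elasticsearch and mongo: those are headless services, which give pods stable DNS names without allocating a virtual cluster IP. As an illustration (the label and port here are assumed; the repo's actual file may differ), a headless service is declared by setting clusterIP to None:

```yaml
# Illustrative headless service for mongo (the real file is in the repo)
apiVersion: v1
kind: Service
metadata:
  name: mongo
spec:
  clusterIP: None       # headless: DNS resolves directly to the pod IPs
  selector:
    app: mongo          # assumed pod label
  ports:
    - port: 27017       # MongoDB's default port, shown for illustration
```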

NOTE: Access localhost:9000 for the Graylog UI.

Step 6: The Graylog UI is empty and does not have any logs to index or show. I have exposed port 12201; create a port-forward for the Graylog GELF HTTP input.
Make sure you have this screen configured in Graylog (configuring GELF HTTP in Graylog):



#kubectl port-forward graylog-2041601814-5qnbc 12201:12201

Step 7: Now we can pump some test log statements into Graylog from localhost:

for i in {1..100000}; do curl -XPOST 127.0.0.1:12201/gelf -p0 -d '{"short_message":"Hello there", "host":"example.org", "facility":"test", "_foo":"bar"}'; done


After step 7, where you pump in the logs, check that they show up in the Graylog UI.







