
Install OpenShift Origin via Ansible

Posted on August 19, 2017 in Technology

OpenShift is a computer software product from Red Hat for container-based software deployment and management. In concrete terms, it is a supported distribution of Kubernetes using Docker containers and DevOps tools for accelerated application development. OpenShift Origin provides “Platform-as-a-Service” or “PaaS”. It provides the necessary parts to quickly deploy and run a LAMP application: the web server, application server, application runtimes and libraries, database service, and so forth. Using OpenShift Origin, you can build your own PaaS.

Pre-requisites

  • OS: CentOS Linux release 7.3 (Core)
  • Disk space for nodes: 20GB minimum
  • Disk space for master: 40GB minimum
  • CPU cores for nodes: 1 core minimum
  • CPU cores for master: 2 cores minimum
  • Docker installed on all nodes.
  • Ansible version >= 2.2.2.0
  • Forward and reverse DNS records (see the quick checks after this list).
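
A couple of quick checks can catch version and DNS problems before the playbook run; the hostname below is the example master used throughout this post, so substitute your own:

# Confirm the Ansible version meets the minimum
ansible --version | head -1
# Forward and reverse lookups should agree for every host
getent hosts master.dev.enron.com
dig +short -x "$(dig +short master.dev.enron.com)"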

Documentation Reference
OpenShift Installation documentation
GitHub OpenShift Ansible Project

Four-node cluster
We will be creating a four-node cluster with one master server. If you are going to run a production cluster, it is highly recommended that you use an HA setup with at least two master servers.

  1. master server
  2. node01 server
  3. node02 server
  4. lb01 server

Docker install

1. Install Docker on all nodes.

yum install docker wget vim -y

2. Edit the /etc/sysconfig/docker file (for example with vim /etc/sysconfig/docker) and add the following line:

INSECURE_REGISTRY='--insecure-registry 10.192.0.0/16'
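
If you prefer a non-interactive edit, something like the following should also work, assuming the variable is not already set in the file:

echo "INSECURE_REGISTRY='--insecure-registry 10.192.0.0/16'" >> /etc/sysconfig/docker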

3. Restart Docker.

systemctl restart docker.service
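
Optionally, enable Docker at boot and confirm that the insecure-registry flag was picked up (the exact docker info output varies by Docker version):

systemctl enable docker.service
docker info 2>/dev/null | grep -i -A1 insecure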

4. Set up a DNS wildcard subdomain. This is not a requirement; if you can't do this, just comment out the following variable in the openshift-ansible hosts file.

openshift_master_default_subdomain=oso.dev.enron.com
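
To confirm the wildcard record works, any name under the subdomain should resolve to the same address; foo and bar here are arbitrary test names:

dig +short foo.oso.dev.enron.com
dig +short bar.oso.dev.enron.com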

Install OpenShift Origin via the Ansible playbook
1. Git clone the openshift-ansible repository.

git clone https://github.com/openshift/openshift-ansible
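
The default branch tracks current development. If you want the playbooks to match the v1.5 release set in the inventory below, you can look for a matching release branch (the branch naming is an assumption here, so check what the listing actually returns):

cd openshift-ansible
git branch -r | grep release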

2. Edit your inventory/byo/hosts file so it looks like this:

# Create an OSEv3 group that contains the masters and nodes groups
[OSEv3:children]
masters
nodes
etcd
lb
# Set variables common for all OSEv3 hosts
[OSEv3:vars]
# SSH user, this user should allow ssh based auth without requiring a
# password. If using ssh key based auth, then the key should be managed by an
# ssh agent.
ansible_ssh_user=root 
# If ansible_ssh_user is not root, ansible_become must be set to true and the
# user must be configured for passwordless sudo
#ansible_become=yes 
# Debug level for all OpenShift components (Defaults to 2)
debug_level=3 
# Specify the deployment type. Valid values are origin and openshift-enterprise.
deployment_type=origin
##containerized
containerized=true 
# Specify the generic release of OpenShift to install. This is used mainly just during installation, after which we
# rely on the version running on the first master. Works best for containerized installs where we can usually
# use this to lookup the latest exact version of the container images, which is the tag actually used to configure
# the cluster. For RPM installations we just verify the version detected in your configured repos matches this
# release.
openshift_release=v1.5
# Specify an exact container image tag to install or configure.
# WARNING: This value will be used for all hosts in containerized environments, even those that have another version installed.
# This could potentially trigger an upgrade and downtime, so be careful with modifying this value after the cluster is set up.
openshift_image_tag=v1.5.0
# htpasswd auth
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider', 'filename': '/etc/origin/master/htpasswd'}]
# default subdomain to use for exposed routes
openshift_master_default_subdomain=oso.dev.enron.com
# default project node selector
osm_default_node_selector='region=primary'
# Default value: 'region=infra'
openshift_hosted_router_selector='region=infra'
# based on the number of nodes matching the openshift router selector.
openshift_hosted_router_replicas=1
# Metrics deployment
# See: https://docs.openshift.com/enterprise/latest/install_config/cluster_metrics.html
#
# By default metrics are not automatically deployed, set this to enable them
openshift_hosted_metrics_deploy=true
openshift_metrics_image_version=v1.5.0
openshift_hosted_metrics_deployer_version=1.5.0
# host group for masters
[masters]
master.dev.enron.com
[etcd]
master.dev.enron.com
# NOTE: Containerized load balancer hosts are not yet supported; if using a global
# containerized=true host variable, we must set it to false on the lb host.
[lb]
lb01.dev.enron.com containerized=false
# NOTE: Currently we require that masters be part of the SDN which requires that they also be nodes
# However, in order to ensure that your masters are not burdened with running pods you should
# make them unschedulable by adding openshift_schedulable=False to any node that's also a master.
[nodes]
master.dev.enron.com openshift_node_labels="{'region': 'infra', 'zone': 'default'}" openshift_schedulable=true
n0[1:2].dev.enron.com openshift_node_labels="{'region': 'primary', 'zone': 'default'}"
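
Before running the full install, it is worth confirming that Ansible can reach every host in the inventory. A quick ad-hoc ping against the OSEv3 group (which includes the masters, nodes, etcd and lb children) does this; run it from the openshift-ansible directory:

ansible -i ./inventory/byo/hosts OSEv3 -m ping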

3. Run the following command to have Ansible install the cluster.

ansible-playbook -i ./inventory/byo/hosts playbooks/byo/config.yml

4. Once the cluster is up and online, log on to the master node and set up a user and password. I used admin for the user and openshift for the password.

htpasswd -b /etc/origin/master/htpasswd admin openshift
openshift admin policy add-cluster-role-to-user cluster-admin admin
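
To confirm the new account works, you can log in with it from the master node; the URL matches the master hostname used throughout this post:

oc login -u admin -p openshift https://master.dev.enron.com:8443
oc whoami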

5. From the master node verify the cluster is up with no errors by running the oc status command.

[root@master ~]# oc status
In project default on server https://master.dev.enron.com:8443

https://docker-registry-default.oso.dev.enron.com (passthrough) (svc/docker-registry)
  dc/docker-registry deploys docker.io/openshift/origin-docker-registry:v1.5.0
    deployment #1 deployed 2 days ago - 1 pod

svc/kubernetes - 172.30.0.1 ports 443, 53->8053, 53->8053

https://registry-console-default.oso.dev.enron.com (passthrough) (svc/registry-console)
  dc/registry-console deploys docker.io/cockpit/kubernetes:latest
    deployment #1 deployed 2 days ago - 1 pod

svc/router - 172.30.190.155 ports 80, 443, 1936
  dc/router deploys docker.io/openshift/origin-haproxy-router:v1.5.0
    deployment #1 deployed 2 days ago - 1 pod

View details with 'oc describe <resource>/<name>' or list everything with 'oc get all'.

6. Verify your metrics pods are running without any errors.

[root@master ~]# oc get pods -n openshift-infra
NAME                         READY     STATUS    RESTARTS   AGE
hawkular-cassandra-1-psfb5   1/1       Running   0          2d
hawkular-metrics-1nc75       1/1       Running   0          2d
heapster-cn4ll               1/1       Running   0          2d

7. Log on to your OpenShift Origin cluster in a browser, using your master node's hostname as the URL on port 8443.

https://master.dev.enron.com:8443
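
As a quick smoke test, you can create a project and deploy a small sample image. The test01 project name matches the cleanup command further down, and openshift/hello-openshift is just one example of a minimal image; any small image will do:

oc new-project test01
oc new-app openshift/hello-openshift
oc get pods -n test01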

If you need to uninstall the cluster and start over, run the following command.

ansible-playbook -i ./inventory/byo/hosts playbooks/adhoc/uninstall.yml

To delete all the pods and services in a project in order to start over, run:

oc delete all  --all -n test01
