Zenoss Core 5 Install Guide for AWS EC2 Instances

Mrvambox
Joined: 12/18/2015 - 12:47
Posts: 44
Zenoss Core 5 Install Guide for AWS EC2 Instances

Before I post this guide I would like to thank everyone on the forums, especially Jan, for helping me deploy Zenoss on my own AWS instance. This was a big learning process for me and will continue to be an exciting learning process, but I could not have made it this far without support from everyone on here. With that being said, I have compiled instructions that I used to deploy Zenoss Core 5 to monitor a Linux server over SNMP. 

Zenoss Core 5 Auto Install Guide for AWS EC2 Instances

 

The first thing to do is to create a server instance on AWS. Zenoss requires that the server's root filesystem have at least 30GB of space, that the server have at least 4 CPUs, and that it have a minimum of 20GB of RAM. It also requires CentOS 7.x, Red Hat Enterprise Linux 7.x, or Ubuntu 14 to run. Below is how I chose to install.

 

AMI – Red Hat Enterprise Linux 7.2

Instance Type – m4.2xlarge *It has 8 CPUs and 32GB of memory*

Configure Instance – Leave Alone

Add Storage – Change the root volume to 40GB to be generous.

Add 4 additional volumes of 40GB each. *This is being generous.*

Tag Instance – I used Zenoss for the key and left the value blank.

Configure Security Group – Select an existing group and choose the one named default.

This will allow traffic from all IPs / ports in and out.

You can change this later.

Review and Launch – Make sure everything is in place and you're good to go.

When you launch you will be prompted to create a key pair for your server.

Make sure you download your key!

 

View your instance and give it a name. I called mine Test. Then go to Elastic IPs in the right-hand column. Allocate a new address and then associate that new address with your new server instance.

 

Now wait a couple of minutes while the server is initialized.

 

Once the server is initialized you can SSH into it. Restrict the key's permissions first, or SSH will refuse to use it.

 

# chmod 400 Zenoss.pem

# ssh -i Zenoss.pem ec2-user@<yourIPaddress>

Once inside, give yourself root privileges:

# sudo su

 

The auto deploy instructions are:

# cd /tmp

# curl -O https://raw.githubusercontent.com/monitoringartist/zenoss5-core-autodeploy/master/core-autodeploy.sh

# chmod +x core-autodeploy.sh

 

Now run fdisk -l to see all your drives. I found that I have these – xvda, xvdb, xvdc, xvdd, xvde.

xvda should be your root drive; you will see how it is partitioned immediately after it is listed. Do not pass your boot drive to the autodeploy script. Your next command should look like:

 

# ./core-autodeploy.sh -d /dev/xvdb -s /dev/xvdc -v /dev/xvdd -b /dev/xvde
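The non-root disks can also be picked out programmatically instead of reading fdisk output by eye. A minimal sketch, assuming lsblk's plain "NAME TYPE" output format; the helper name `list_data_disks` is mine, not part of the autodeploy script:

```shell
# List candidate data disks to pass to core-autodeploy.sh.
# $1: lsblk-style "NAME TYPE" listing, one device per line
# $2: root disk name to exclude (e.g. xvda)
list_data_disks() {
  echo "$1" | awk -v root="$2" '$2 == "disk" && $1 != root { print "/dev/" $1 }'
}

# On the server itself you would feed it real output:
#   list_data_disks "$(lsblk -rno NAME,TYPE)" xvda
```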

 

It will then ask a series of questions; answer yes to all.

The script takes about 20-30 minutes to complete. Monitor its status to watch for errors.

Most likely you will see a problem with installing the Percona Toolkit. The project developers have told me this is not a big deal and that it is not required for Zenoss 5.

 

 

When the script finishes it will print a completion notice along with some additional instructions, including adding a line to your hosts file. This means the hosts file on whatever machine you will be accessing Zenoss Core from.

 

Paths for the hosts file are as follows:

Mac/Linux/Unix : /etc/hosts

Windows : C:\Windows\System32\Drivers\etc\hosts 

You should add a line similar to this:

 

255.22.0.90 ip-255-31-58-222.ec2.internal hbase.ip-255-31-58-222.ec2.internal opentsdb.ip-255-31-58-222.ec2.internal rabbitmq.ip-255-31-58-222.ec2.internal zenoss5.ip-255-31-58-222.ec2.internal

 

***IP addresses listed here are fake for obvious reasons. Yours should match the IP / hostname of your own server.
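If you prefer to generate that line from your own IP and hostname rather than retype it, a small sketch (the helper name `make_hosts_line` is hypothetical, not from the script's output):

```shell
# Build the /etc/hosts line Zenoss asks for from the server's IP
# and its EC2-internal hostname.
make_hosts_line() {
  local ip="$1" host="$2"
  echo "$ip $host hbase.$host opentsdb.$host rabbitmq.$host zenoss5.$host"
}

make_hosts_line 255.22.0.90 ip-255-31-58-222.ec2.internal
```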

 

Next, reboot your VM and log back in. It may take a couple of minutes before you can log back on.

 

Before we log onto the Control Center for Zenoss, we have a couple more things to do at the command line. First is adding a new user. Note that ccuser was created by the autodeploy script; I will add a separate user instead.

 

Access root privileges and add a user.

 

sudo su

adduser zenoss

passwd zenoss ← set a password you will remember

 

Now we need to enable that user to access the control center.

 

Add the user to the default admin group

sudo usermod -aG wheel zenoss

Create a variable for the group to designate as the admin group

GROUP=serviced

Create this new group.

sudo groupadd ${GROUP}

Add the user to the group

sudo usermod -aG ${GROUP} zenoss

Change the value of the SERVICED_ADMIN_GROUP in /etc/default/serviced

EXT=$(date +"%j-%H%M%S")

test ! -z "${GROUP}" && \
  sudo sed -i.${EXT} \
    -e 's|^#[^S]*\(SERVICED_ADMIN_GROUP=\).*$|\1'${GROUP}'|' \
    /etc/default/serviced \
  || echo "** GROUP undefined; no edit performed"

Restart Control Center

systemctl stop serviced && systemctl start serviced
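If you want to sanity-check that sed one-liner before touching the real file, you can run it against a throwaway copy first. A sketch; the mock line below is a representative commented-out entry, the real target is /etc/default/serviced:

```shell
# Try the SERVICED_ADMIN_GROUP substitution on a throwaway file.
GROUP=serviced
conf=$(mktemp)
echo '# SERVICED_ADMIN_GROUP=wheel' > "$conf"
sed -i -e 's|^#[^S]*\(SERVICED_ADMIN_GROUP=\).*$|\1'"${GROUP}"'|' "$conf"
cat "$conf"    # the line should now read: SERVICED_ADMIN_GROUP=serviced
rm -f "$conf"
```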

 

This will take a minute. You should now be able to access Control Center as user zenoss with the password you defined. Your next step is to start Zenoss Core. Once you see its status become a checkmark you will be able to access Zenoss Core by clicking on its virtual host name. This will also take a minute. Don't worry if you see red exclamation points at first; they should resolve. You can now go in and set up your Zenoss Core application.

 

 

 

Setup for SNMP Monitoring w/ Zenoss – Linux Server

 

In order to monitor other servers over SNMP, you must install SNMP and its utilities and libraries on the server you wish to monitor and make a slight edit to the configuration file.

 

To install:

 

yum install -y net-snmp

yum install -y net-snmp-utils

yum install -y net-snmp-libs

 

Now navigate to /etc/snmp:

 

cd /etc/snmp

 

Now we need to make two slight edits to the snmpd.conf file.

First make a backup:

 

cp snmpd.conf snmpd.conf.bak

 

Now we need to edit the file using a text editor. I will use vim.

 

vim snmpd.conf

 

Now find this part of the file and make the changes seen below:

Before:

 

# Make at least snmpwalk -v 1 localhost -c public system fast again.

# name incl/excl subtree mask(optional)

view systemview included .1.3.6.1.2.1.1

view systemview included .1.3.6.1.2.1.25.1.1

 

After:

 

# Make at least snmpwalk -v 1 localhost -c public system fast again.

# name incl/excl subtree mask(optional)

view systemview included .1

 

Also,

Before:

 

syslocation Unknown (edit /etc/snmp/snmpd.conf)

syscontact Root <root@localhost> (configure /etc/snmp/snmp.local.conf)

 

After:

 

syslocation AWS - Test Server - *Name*

syscontact *Name* <*username*@*hostname*>

Change anything in the ** to fit your own information.
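The two view-line edits above can also be made non-interactively with sed instead of vim. A sketch; the function name `open_systemview` is mine, and a .bak backup copy of the file is kept:

```shell
# Replace the two default systemview lines with a single ".1" view,
# opening the whole OID tree exactly as in the manual edit above.
open_systemview() {
  sed -i.bak \
    -e 's|^view[[:space:]]\{1,\}systemview[[:space:]]\{1,\}included[[:space:]]\{1,\}\.1\.3\.6\.1\.2\.1\.1$|view    systemview    included   .1|' \
    -e '/^view[[:space:]]\{1,\}systemview[[:space:]]\{1,\}included[[:space:]]\{1,\}\.1\.3\.6\.1\.2\.1\.25\.1\.1$/d' \
    "$1"
}

# open_systemview /etc/snmp/snmpd.conf
```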

Before we do anything else, we need to turn the SNMP service on.

 

systemctl start snmpd.service

 

If you want, configure the service to start automatically at boot time (on RHEL 7, chkconfig forwards this to systemctl enable):

 

chkconfig snmpd on

 

You can check to make sure SNMP is running with an snmpwalk:

 

snmpwalk -v2c -c public localhost system

 

Now go to your Zenoss Core application in your browser. Go to Infrastructure and click the little computer monitor with the plus sign to add a single device. Set the hostname or IP to that of your server. Set the device class to /Server/Linux, give your device a title, then click More... and change the SNMP community to public. Then click Add.

 

 

You can now look at the device through Zenoss Core and monitor it. It will take up to 15 minutes before you can see graphical information about the device.

Jan.garaj
Joined: 04/20/2014 - 16:23
Posts: 431
Great. Could you create wiki

Great. Could you create a wiki page, please? http://wiki.zenoss.org/Main_Page

Notes:
- you really don't need 4 drives; partitioning depends on your preferences. The minimal case requires 2 disks: the 1st disk for an XFS filesystem and the 2nd one for a btrfs filesystem

- you can edit the autodeploy script – the user variable (https://github.com/monitoringartist/zenoss5-core-autodeploy/blob/master/core-autodeploy.sh#L41) – and your preferred zenoss user will be created instead of ccuser

- if you are using AWS, you can use the AWS DNS service, Route 53, and then you don't need to play with the hosts file – ownership of some domain will be required


Harish
Joined: 04/12/2017 - 08:58
Posts: 3
Getting Error with autodeploy script

Hi All,

I am getting the below error while installing Zenoss 5 through the autodeploy script.

Initially I got an error with the Docker version. I changed the Docker version from 1.9.0 to 1.12.1, after which I was stuck with the error below.
 

I am new to Zenoss 5 and Docker.
 

Your help will be highly appreciated!

 

=====

systemctl enable docker && systemctl start docker
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
Problem with starting of Docker
[root@ip-172-31-33-25 tmp]#
[root@ip-172-31-33-25 tmp]#
[root@ip-172-31-33-25 tmp]#
[root@ip-172-31-33-25 tmp]#
[root@ip-172-31-33-25 tmp]#
[root@ip-172-31-33-25 tmp]# systemctl start docker.service
Job for docker.service failed because the control process exited with error code. See "systemctl status docker.service" and "journalctl -xe" for details.
[root@ip-172-31-33-25 tmp]# systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
  Drop-In: /etc/systemd/system/docker.service.d
           └─docker.conf
   Active: failed (Result: exit-code) since Wed 2017-04-12 10:24:01 UTC; 18s ago
     Docs: https://docs.docker.com
  Process: 12762 ExecStart=/usr/bin/docker daemon $OPTIONS -H fd:// (code=exited, status=1/FAILURE)
 Main PID: 12762 (code=exited, status=1/FAILURE)

 

Apr 12 10:24:01 ip-172-31-33-25.ec2.internal systemd[1]: Starting Docker Application Container Engine...
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal docker[12762]: Warning: '-dns' is deprecated, it will be replaced by '--dns' soon. See usage.
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal docker[12762]: time="2017-04-12T10:24:01.474372062Z" level=fatal msg="no sockets found via soc...stemd"
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal systemd[1]: docker.service: main process exited, code=exited, status=1/FAILURE
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal systemd[1]: Failed to start Docker Application Container Engine.
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal systemd[1]: Unit docker.service entered failed state.
Apr 12 10:24:01 ip-172-31-33-25.ec2.internal systemd[1]: docker.service failed.
Hint: Some lines were ellipsized, use -l to show in full.

Mad-13063
Joined: 04/08/2017 - 10:00
Posts: 2
Hi

Hi

You probably have an issue with the -H option passed to docker.

 

   Process: 12762 ExecStart=/usr/bin/docker daemon $OPTIONS -H fd:// (code=exited, status=1/FAILURE)
 Main PID: 12762 (code=exited, status=1/FAILURE)

Your config:

Red Hat Enterprise Linux 7.2

Docker 1.12.1

From https://github.com/docker/docker/issues/22847

Docker 1.12 on CentOS no longer uses socket activation, so you need to remove the -H fd:// from the systemd unit file.

Your file is probably /etc/systemd/system/docker.service.d/docker.conf
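A sketch of that edit; the function name `remove_fd_socket` is mine, and the drop-in path is the one suggested above:

```shell
# Strip the stale "-H fd://" flag from the ExecStart line of a
# systemd drop-in file, keeping a .bak copy.
remove_fd_socket() {
  sed -i.bak 's| -H fd://||g' "$1"
}

# remove_fd_socket /etc/systemd/system/docker.service.d/docker.conf
# then: sudo systemctl daemon-reload && sudo systemctl start docker
```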

 

Cheers

 

 

Harish
Joined: 04/12/2017 - 08:58
Posts: 3
Thank you I will try that.  

Thank you I will try that.

 
