This repository has been archived by the owner on Jun 8, 2021. It is now read-only.

A memory monitoring system with gnocchi database developed for the course of Cloud Computing of the MSc AIDE at the University of Pisa.


Memory Monitoring


What is this repository?

It is a project for the Cloud Computing course of the MSc in Artificial Intelligence and Data Engineering at the University of Pisa.

What is this project supposed to do?

The project periodically retrieves the memory usage of remote machines and saves the data into an instance of the Gnocchi database; it also retrieves such values from the database and displays them.

The project is composed of a producer and a consumer. The producer reads the data from the machines over SSH connections and saves the retrieved data to Gnocchi. The consumer reads the data back from Gnocchi and displays it.
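As a rough Python sketch of the producer's per-host step: run `free` on the remote machine (for example over SSH with a library such as paramiko) and turn its output into a used-memory percentage. `parse_memory_usage` is an illustrative helper, not the actual code in `producer.py`.

```python
def parse_memory_usage(free_output: str) -> float:
    """Parse the 'Mem:' line of `free` output into a used-memory percentage."""
    for line in free_output.splitlines():
        if line.startswith("Mem:"):
            fields = line.split()
            total, used = int(fields[1]), int(fields[2])
            return round(used / total * 100, 2)
    raise ValueError("no 'Mem:' line found in free output")

# With the percentage in hand, the producer would push the value to Gnocchi
# as a measure on that machine's metric (see the config.json section below).
```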

Architecture

Run our project by yourself!

To run this project, you first need a cloud infrastructure with OpenStack and Gnocchi installed on top. You also need Docker to run both the producer and the consumer in containers.

Copy the consumer and producer directories to the machine where Docker is installed and connect to it. The directories must have the following structure:

root@host-name:~# ls
consumer  producer 

producer/ directory:

root@host-name:~/producer# ls
config.json  producer.py  Dockerfile  requirements.txt

consumer/ directory:

root@host-name:~/consumer# ls
config.json  consumer.py  Dockerfile  requirements.txt

The config.json file is the same and must be present in both directories. Change the authentication parameters and the Gnocchi URL in both the producer and consumer scripts.

Producer

Build the customized image using the Dockerfile. Run the following command inside the producer directory:

docker build -t producer .

Run the container in background with the -d option:

 docker run -d producer

or, if you want to see the producer output in real time, run the container in the foreground with the -it option:

 docker run -it producer

Every 30 seconds the producer starts a polling round: for every machine it makes three requests, one every 4 seconds.
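That timing can be sketched as follows (the constants are assumptions matching the description above; the actual values live in `producer.py`):

```python
import time

POLL_INTERVAL = 30      # seconds between polling rounds
REQUESTS_PER_ROUND = 3  # samples per machine in each round
REQUEST_SPACING = 4     # seconds between consecutive samples

def sample_offsets() -> list:
    """Offsets (in seconds) within a round at which the samples are taken."""
    return [i * REQUEST_SPACING for i in range(REQUESTS_PER_ROUND)]

# A round therefore samples at t = 0, 4, and 8 seconds; the producer then
# sleeps until the next round starts, e.g. with time.sleep(POLL_INTERVAL).
```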

Consumer

Build the customized image of the consumer using the Dockerfile. Run the following command inside the consumer directory:

docker build -t consumer .

Since the consumer is an interactive script, it must run in the foreground so you can choose the aggregation method and granularity to display. The consumer reads the producer's periodic updates every 30 seconds.

docker run -it consumer

Example of execution:

root@host-name:~/consumer# docker run -it consumer
Please, chose a kind of aggregation:

1) Mean
2) Min
3) Max
1
Please, chose a granularity:

1) Minute
2) Hour
1

Host: 172.0.0.1

+---------------------------+-------------+--------------------+
|         Timestamp         | Granularity |        MEAN        |
+---------------------------+-------------+--------------------+
| 2020-07-07T19:20:00+00:00 |     60.0    |       35.89        |
| 2020-07-07T19:21:00+00:00 |     60.0    |       35.87        |
| 2020-07-07T19:22:00+00:00 |     60.0    |       35.88        |
| 2020-07-07T19:23:00+00:00 |     60.0    |       35.88        |
| 2020-07-07T19:24:00+00:00 |     60.0    |       36.16        |
+---------------------------+-------------+--------------------+

# ... results for the other machines
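The menu above maps onto Gnocchi query parameters roughly like this (a hypothetical mapping, not the actual code in `consumer.py`; the aggregation is a method name and the granularity is expressed in seconds):

```python
# Menu choice -> Gnocchi aggregation method
AGGREGATIONS = {1: "mean", 2: "min", 3: "max"}
# Menu choice -> granularity in seconds (1: Minute, 2: Hour)
GRANULARITIES = {1: 60, 2: 3600}

def query_params(aggregation_choice: int, granularity_choice: int):
    """Translate the two menu choices into (aggregation, granularity)."""
    return AGGREGATIONS[aggregation_choice], GRANULARITIES[granularity_choice]
```

In the example run above, choosing 1 and then 1 would query the mean aggregation at a 60-second granularity, matching the `MEAN` column and `60.0` granularity in the table.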

Useful commands:

List all the containers:

docker ps -a

Stop all the containers:

docker stop $(docker ps -a -q)

Remove all the containers:

docker rm $(docker ps -a -q)

List all the images:

docker images -a

Remove all the images (add -f option to force the action):

docker rmi $(docker images -a -q)

One more thing...

We use one Gnocchi metric for each machine. For each metric we use the medium archive policy, which aggregates the data at two different granularities:

  • 1 minute granularity over 7 days
  • 1 hour granularity over 365 days

We don't use a specific aggregation because our consumer retrieves the data using mean, min and max.
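In numbers, the two definitions above work out as follows (granularity and timespan in seconds, as Gnocchi expresses them; the number of stored points is the timespan divided by the granularity):

```python
DAY = 24 * 3600  # seconds in a day

# The two archive-policy definitions described above.
ARCHIVE_POLICY_DEFINITION = [
    {"granularity": 60, "timespan": 7 * DAY},      # 1-minute points for 7 days
    {"granularity": 3600, "timespan": 365 * DAY},  # 1-hour points for 365 days
]

def points(definition: dict) -> int:
    """Number of aggregated points a definition keeps."""
    return definition["timespan"] // definition["granularity"]
```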

For the application to work, config.json must be filled in properly. The config file should include a list of the hosts whose memory is to be monitored, each with its respective Gnocchi metric ID.

{
    "hosts": [
        {
            "ip": "machine-ip",
            "user": "machine-user",
            "password": "machine-password",
            "metric": "gnocchi-metric-id"
        }
    ]
}
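A minimal loader for this file could look like the sketch below; `load_hosts` and `REQUIRED_KEYS` are illustrative names, not the actual code in the scripts.

```python
import json

# Keys each host entry must provide, per the config.json schema above.
REQUIRED_KEYS = {"ip", "user", "password", "metric"}

def load_hosts(path: str) -> list:
    """Load config.json and check that every host entry has the expected keys."""
    with open(path) as f:
        config = json.load(f)
    hosts = config["hosts"]
    for host in hosts:
        missing = REQUIRED_KEYS - host.keys()
        if missing:
            raise ValueError(f"host entry missing keys: {sorted(missing)}")
    return hosts
```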

Don't worry! We created a script that creates a metric for each machine in the database. Just change the Gnocchi IP in the script, fill in a config.json file with the IP, username, and password of the machines you want to monitor, and run it.

Credits

@thorongil05, @ragnar1002, @matildao-pane, @seraogianluca
