# Skivvy
This is a Python program which parses the data in the coblob database and transforms it into a format which the co-data project can use. One of the main goals of this project is to reduce the load on the CPU in the co-data project.
## Quick Start
1. `python3 -m venv venv`
2. `. venv/bin/activate`
3. `pip install -r requirements.txt`
To run the program, use the following command (assuming you are in the project's root directory):
```bash
# -v enables verbose output. Remove it if not wanted.
# -t (target) is the directory the data will be saved to.
# -t is required.
python app/main.py -t save/data/location/path -v
```
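For reference, here is a minimal sketch of how those two flags might be wired up with `argparse`. It is illustrative only; the function and argument names below are assumptions, not the actual contents of `main.py`.

```python
# Hypothetical sketch of the CLI flags described above; the real main.py may differ.
import argparse


def parse_args():
    parser = argparse.ArgumentParser(
        description="Transform coblob data into a format co-data can use.")
    # -t is required: the directory the transformed data is saved to.
    parser.add_argument("-t", "--target", required=True,
                        help="directory to save the transformed data to")
    # -v is optional: enables verbose output.
    parser.add_argument("-v", "--verbose", action="store_true",
                        help="enable verbose output")
    return parser.parse_args()


if __name__ == "__main__":
    args = parse_args()
    print(f"Saving to {args.target} (verbose={args.verbose})")
```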
## Architecture Overview
The program itself is situated in the `app` folder. The access point is `main.py` and the bulk of the work is shared between the code in the `coordinators` and `services` directories.
```
# The architecture's (layered) flow.
Input  -> main.py -> coordinators -> services
                                        |
Output <- main.py <- coordinators <- services
```
You should not need to touch much of the code in `main.py`; its main focus is stating the program's tasks at a high level. The calls in `main.py` are passed on to the `coordinators` layer, which then makes the necessary function calls into `services` to produce the result stated in `main.py`. The flow of the code is rigid: `main.py` does not interact with the `services` layer directly, it goes through the `coordinators` layer, and, likewise, the code in the `services` layer does not call back into `main.py`.
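As a rough illustration of that layering, the call pattern looks something like the sketch below. The module paths and function names are made up for the example and do not match the real code.

```python
# Illustrative only: shows the main.py -> coordinators -> services flow,
# not the project's actual modules or function names.

# services/example_service.py (hypothetical)
def fetch_raw_data():
    """Low-level work, e.g. reading the coblob data."""
    return [{"value": 1}, {"value": 2}]


# coordinators/example_coordinator.py (hypothetical)
def build_report():
    """Coordinates service calls; main.py never calls the service directly."""
    raw = fetch_raw_data()
    return {"total": sum(item["value"] for item in raw)}


# app/main.py (hypothetical)
def main():
    """States the task at a high level and hands it to a coordinator."""
    report = build_report()
    print(report)


if __name__ == "__main__":
    main()
```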
For the list of requirements for this project, please view the `requirements.txt` file in the project's root directory.
## Note About Intended Usage of This Project
While the program can be executed as a standalone tool, its main reason for existing is to reduce the CPU load on the [co-data](https://git.abbether.net/craig.oates/co-data) project. It does this by running as a cron job once a day; the results from that job are used to build the charts for the co-data website (for that day). The data this program transforms is called/generated from the [co-api](https://git.abbether.net/craig.oates/co-api) project. The data needs to be transformed because it is not usable in its raw (REST API) form when called directly from the co-api project.
The rate of change of the (co-api) data is what brought about the decision to make this program. The rate is very slow, so it is unnecessary for the server to transform the data with every request it receives. This program acts as a cache for co-data to use. Reducing the amount of data transformation also reduces the load on the CPU at the time of a web request.
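In other words, the daily job does the expensive transformation once and writes the result somewhere co-data can simply read back. A hypothetical sketch of that idea follows; the file name, JSON format and helper function are assumptions for illustration, not the project's actual output.

```python
# Hypothetical sketch of the "transform once, read many times" idea.
# The output file name and JSON format are assumptions, not skivvy's real output.
import json
from datetime import date
from pathlib import Path


def write_daily_cache(transformed_data, target_dir):
    """Write the transformed data to the target directory as a dated JSON file."""
    target = Path(target_dir)
    target.mkdir(parents=True, exist_ok=True)
    cache_file = target / f"co-data-{date.today().isoformat()}.json"
    cache_file.write_text(json.dumps(transformed_data))
    return cache_file
```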
Debian (or Debian-based) operating systems are the intended systems for this program to run on. To set the cron job on these systems, use `crontab -e`.
When the file is open, enter the following to make this program run once a day at 6 A.M.: `0 06 * * * /path/to/venv/python /path/to/project/app/main.py`. Do not forget to change the paths before saving the file. Note that the usage above states the `-t` flag is required, so you will most likely need to append it (and a save path) to the cron entry as well.
For the sake of clarity, make sure this program is on the same computer (or at least the same local network) as the co-data project. It needs access to the data, otherwise it will not run as intended.