🎮👾 Swiss Games Garden

The Swiss Games Garden API project is based on 💦 Drupal, 🕸 JSON:API and 🥃 Gin as the admin UI. We built it around 🔍 Elasticsearch to expose search-engine capabilities. It runs on 🐳 Docker. We use 📝 Swagger for documentation and ✅ PHPUnit/Behat for testing. We deploy with 🚀 Capistrano and manage our dependencies with 🎶 Composer.

We made it with 💗.


🔧 Prerequisites

First of all, you need to have the following tools installed globally on your environment:

  • docker
  • composer
  • drush

Don't forget to add the following binaries to your PATH (see the example after this list):

  • php
  • mysql
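For example, assuming a typical local install (the paths below are illustrative only and depend on where php and mysql live on your machine), you could extend your PATH in your shell profile:

# Illustrative only -- adjust to your own install locations for php and mysql.
export PATH="/usr/local/opt/php/bin:/usr/local/opt/mysql-client/bin:$PATH"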

🐳 Docker Install

Project setup

cp docker-compose.override-example.yml docker-compose.override.yml

Update any values as needed, for example when port 8080 is already in use:

services:
  # Drupal development server
  dev:
    hostname: dev
    ports:
      - "8082:80"

Another example when you already have a local MySQL server using port 3306:

# Database
db:
  ports:
    - "13306:3306"

Project bootstrap

docker-compose build --pull
docker-compose up --build -d
docker-compose exec app docker-as-drupal bootstrap --with-default-content --with-elasticsearch
(get a coffee, this will take some time...)
docker-compose exec app drush eshs
docker-compose exec app drush eshr
docker-compose exec app drush queue-run elasticsearch_helper_indexing
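Once the bootstrap is done, a quick way to check that the site responds (assuming the default 8080 port mapping; use whatever port you set in your docker-compose.override.yml):

curl -I http://localhost:8080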

Project setup

Once the project is up and running via Docker, you may need to set up some configuration in web/sites/default/settings.php.

Project base URL

As we work with a decoupled architecture, we need to set the frontend website URL.

/**
 * Base URL of the Next App.
 *
 * This value should not contain a trailing slash (/).
 *
 * @var string
 */
$config['frontend']['base_url'] = 'https://swissgames.garden';

Sitemap

The base URL of sitemap links can be overridden using the following settings.

/**
 * The base URL of sitemap links can be overridden here.
 *
 * @var string
 */
$config['simple_sitemap.settings']['base_url'] = 'https://api.swissgames.garden';

Symfony Mailer, Sendmail & Mailcatcher

We use Symfony Mailer to manage the mail transport. For this project, Gandi provides us with an SMTP server.

/**
 * The Symfony Mailer transporter.
 *
 * @var string
 */
$config['symfony_mailer.settings']['default_transport'] = 'smtp_gandi';
$config['symfony_mailer.mailer_transport.gandi_smtp']['configuration']['user'] = 'dev@swissgames.garden';
$config['symfony_mailer.mailer_transport.gandi_smtp']['configuration']['pass'] = '';

For local development, we use MailCatcher as a fake SMTP server. MailCatcher prevents mail from actually being sent and exposes it through a web UI at http://localhost:1080.

/**
 * The Symfony Mailer transporter.
 *
 * @var string
 */
$config['symfony_mailer.settings']['default_transport'] = 'smtp';
$config['symfony_mailer.mailer_transport.smtp']['configuration']['host'] = 'mailcatcher';
$config['symfony_mailer.mailer_transport.smtp']['configuration']['port'] = '1025';

CDN

We use an "Origin Pull" CDN via https://api.swissgames.garden. This CDN is used for all static content except JS & CSS. You will need to override this URL or disable the CDN for your local environment.

/**
 * The CDN static-content status.
 *
 * @var boolean
 */
$config['cdn.settings']['status'] = false;

By default, we disable the CDN for the development process, as the host port may vary between developers and therefore the mapping domain may change.

/**
 * The CDN mapping domain to be used for static-content.
 *
 * @var string
 */
$config['cdn.settings']['mapping']['domain'] = 'api.swissgames.garden';

Elasticsearch prefix

We use a single Elasticsearch server for both the Production & Staging environments, so we need to separate our indexes by name. We use prefixes to achieve this.

/**
 * Setting used to add a prefix for ES index based on the environment.
 */
$settings['gos_elasticsearch.index_prefix'] = 'local';

When it's not the first time

docker-compose build --pull
docker-compose up --build -d
docker-compose exec app drush cr (or any other drush command you need)
docker-compose exec app docker-as-drupal db-reset --with-default-content
docker-compose exec app drush eshr
docker-compose exec app drush queue-run elasticsearch_helper_indexing

(optional) Get the production images

bundle exec cap production files:download

Docker help

docker-compose exec app docker-as-drupal --help

🚔 Static Analyzers

You can read more about it in our CONTRIBUTING section.

After a git pull/merge

docker-compose down
docker-compose build --pull
docker-compose up --build -d
docker-compose exec app docker-as-drupal db-reset --with-default-content --with-elasticsearch

Prepend every command with docker-compose exec app to run them on the Docker environment.
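For example, a cache rebuild or a status report from your host looks like this (drush status is just an illustrative command):

docker-compose exec app drush cr
docker-compose exec app drush status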

🚀 Deploy

First time

# You need to have ruby & bundler installed
$ bundle install

Each time

We use Capistrano to deploy:

bundle exec cap -T
bundle exec cap staging deploy

🔍 Elasticsearch

All ports given below may be changed in your own docker-compose.override.yml.

The Docker installation ships with a working Elasticsearch in version 6.8.5.

You may browse your ES server by using DejaVu UI.

  1. Open DejaVu in your local browser http://localhost:1358/

  2. Connect to your Elasticsearch instance using http://localhost:19200 and select your index (e.g. development_gos_node_game_en).

    Example working link: http://localhost:1358/?appname=development_gos_node_game_en&url=http://localhost:19200

    The local machine port is the one defined in your docker-compose.yml or docker-compose.override.yml. In the following example the local port is 19200 and the port inside Docker is 9200.

      elasticsearch:
        ports:
          - "19200:9200"

Index

docker-compose exec [app|test] drush eshr
docker-compose exec [app|test] drush queue-run elasticsearch_helper_indexing

List of Indexes

docker-compose exec elasticsearch curl http://127.0.0.1:9200/_cat/indices

This should print

yellow open gos lsSuUuMjTyizjL_WLECfyQ 5 1 0 0 1.2kb 1.2kb

Recreate Index from scratch

This operation is necessary when the Elasticsearch schema has been updated.

    docker-compose exec app drush eshd -y
    docker-compose exec app drush eshs
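After recreating the index, re-run the indexing as described in the Index section above:

    docker-compose exec app drush eshr
    docker-compose exec app drush queue-run elasticsearch_helper_indexing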

Health Check

Check that Elasticsearch is up and running.

docker-compose exec elasticsearch curl http://127.0.0.1:9200/_cat/health

📋 Documentation

We use Swagger to document our custom REST endpoints.

The swagger.json file is expected to be stored in ./swagger/swagger.json. You may access the staging or production REST specification via the corresponding links.
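If you want to browse the local specification, one option (not part of the project tooling; the swaggerapi/swagger-ui image, port and mount path are assumptions) is to point a Swagger UI container at that file:

# Hypothetical example -- serves ./swagger/swagger.json on http://localhost:8081
docker run --rm -p 8081:8080 -e SWAGGER_JSON=/spec/swagger.json -v "$(pwd)/swagger:/spec" swaggerapi/swagger-ui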

Custom modules:

🚑 Troubleshooting

Error while running the Elasticsearch setup?

  No alive nodes found in your cluster

It seems your Elasticsearch cluster is not reachable from the Docker container.

A common mistake is a misconfiguration in docker-compose.yml with a missing host:

DRUPAL_CONFIG_SET: >-
    elasticsearch_helper.settings elasticsearch_helper.host elasticsearch

Run the diagnostic command to show the value of elasticsearch_helper.host on your container:

docker-compose exec app drush cget elasticsearch_helper.settings --include-overridden

It should print:

elasticsearch_helper:
  scheme: http
  host: elasticsearch
  port: 9200
  authentication: 0
  user: ''
  password: ''
  defer_indexing: 0

If you get something else in host (such as localhost), then your initial bootstrap was made without the host config key and needs to be rerun:

docker-compose exec app docker-as-drupal db-reset --update-dump --with-default-content

Elasticsearch indexing failed with the error FORBIDDEN/12/index read-only / allow delete?

By default, Elasticsearch goes into read-only mode when you have less than 5% of free disk space.

First, you will need to remove all documents and indices from Elasticsearch (or change the disk size).

docker-compose exec elasticsearch curl -X DELETE http://127.0.0.1:9200/_all
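To confirm how much free disk space Elasticsearch actually sees, you can query the standard _cat/allocation endpoint (this check is a suggestion, not part of the original procedure):

docker-compose exec elasticsearch curl "http://127.0.0.1:9200/_cat/allocation?v"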

Then you can fix it by running the following commands:

docker-compose exec elasticsearch curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_cluster/settings -d '{ "transient": { "cluster.routing.allocation.disk.threshold_enabled": false } }'
docker-compose exec elasticsearch curl -XPUT -H "Content-Type: application/json" http://127.0.0.1:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'

Error while importing config ?

The import failed due for the following reasons:                                                                                                   [error]
Entities exist of type <em class="placeholder">Shortcut link</em> and <em class="placeholder"></em> <em class="placeholder">Default</em>. These
entities need to be deleted before importing.

Solution 1: Delete all your shortcuts from the Drupal Admin on admin/config/user-interface/shortcut/manage/default/customize.

Solution 2: Delete all your shortcuts with drush

drush ev '\Drupal::entityTypeManager()->getStorage("shortcut_set")->load("default")->delete();'

How to disable the Drupal cache for dev?

The trick is to add these two lines to your settings.php:

// Do this only after Drupal has been installed.
$settings['container_yamls'][] = DRUPAL_ROOT . '/sites/development.services.yml';
$settings['cache']['bins']['render'] = 'cache.backend.null';

A better way is to use example.settings.local.php, which does more for your dev environment (think of it like the app_dev.php front controller in Symfony):

  1. Copy the example local file:

    cp sites/example.settings.local.php sites/default/settings.local.php
  2. Uncomment the following lines in your settings.php:

    if (file_exists(__DIR__ . '/settings.local.php')) {
      include __DIR__ . '/settings.local.php';
    }
  3. Clear the cache

    drush cr

How to enable Twig debug for dev?

  1. Copy the example local file:

    cp sites/default/default.services.yml sites/default/services.yml
  2. Set the debug value of twig.config to true:

    parameters:
      twig.config:
        debug: true
  3. Clear the cache

    drush cr

Read More about it

Trouble when running coding standard validations

ERROR: the "Drupal" coding standard is not installed. The installed coding standards are MySource, PEAR, PHPCS, PSR1, PSR2, Squiz and Zend

You have to register the Drupal and DrupalPractice standards with PHPCS:

./vendor/bin/phpcs --config-set installed_paths [absolute-path-to-vendor]/drupal/coder/coder_sniffer
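You can verify the registration afterwards; phpcs -i lists the installed standards and should now include Drupal and DrupalPractice:

./vendor/bin/phpcs -i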

🏆 Tests

All tests should be run inside the Docker environment.

  1. Run a shell on your Docker test env.
docker-compose exec test bash
  2. Once connected to your Docker test container, you may run any docker-as-drupal command:
docker-as-drupal [behat|phpunit|nightwatch]

You may also run the commands directly, without opening a bash on the Docker test env, using:

docker-compose exec test docker-as-drupal [behat|phpunit|nightwatch]

💻 Drush Commands

🕙 Crons

# Drupal - Production
# ----------------
## Every 5 minutes
*/5 * * * * root /var/www/docker/cron.sh 2>&1

Crontab

📢 RSS

📈 Monitoring

New Relic

New Relic requires two components to work: the PHP agent (inside our app container) and a daemon (another container), which aggregates data sent from one or more agents and sends it to New Relic.

By default, we removed the New Relic Docker container ARGS and depends_on to avoid building extra containers for developers. Therefore, in the Staging & Production docker-compose.override.yml we have added these extra parameters:

build:
  context: .
  args:
    - 'NEW_RELIC_AGENT_VERSION=10.16.0.5'
    - 'NEW_RELIC_LICENSE_KEY=LICENSE'
    - 'NEW_RELIC_APPNAME=Games of Switzerland'
    - 'NEW_RELIC_DAEMON_ADDRESS=newrelic-apm-daemon:31339'
depends_on:
    - newrelic-apm-daemon
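The daemon service itself is not shown above. A minimal sketch of what it could look like, assuming the official newrelic/php-daemon image (the image name is an assumption; only the service name and port 31339 come from the daemon address above):

# Hypothetical sketch -- adjust the image/version to your setup.
newrelic-apm-daemon:
  image: newrelic/php-daemon
  expose:
    - "31339"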

You may also add the API key in settings.php (on staging/production) to enable data collection by the contrib module new_relic_rpm:

$config['new_relic_rpm.settings']['api_key'] = 'YOUR_API_KEY';

Authors

👨‍💻 Kevin Wenger

👨‍💻 Toni Fisler

👩‍💻 Camille Létang

👨‍💻 Pierre Georges

🤝 Contributing

Contributions, issues and feature requests are welcome!

Feel free to check the issues page.