Kafka on docker-compose

Kafka setup: create a topic, then send and receive messages

https://www.conduktor.io/kafka/kafka-topics-cli-tutorial/

				
					
# --------------
# docker-compose.yml
# --------------
version: '2'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000
    ports:
      - 22181:2181
  
  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - 29092:29092
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092,PLAINTEXT_HOST://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
# --------------

## Create topic
# -------------
kafka-topics --bootstrap-server localhost:29092 --topic firstTopi1 --create --partitions 3 --replication-factor 1
# Created topic firstTopi1.


## List topics
# -------------
kafka-topics --bootstrap-server localhost:29092 --list
# firstTopi1
# first_topic
# test_topic


## Describe topic
# -------------
kafka-topics --bootstrap-server localhost:29092 --describe --topic first_topic
# Topic: first_topic      TopicId: VwJGV8HeSXa1NjmZ5Hfvxw PartitionCount: 3       ReplicationFactor: 1    Configs: 
#         Topic: first_topic      Partition: 0    Leader: 1       Replicas: 1     Isr: 1
#         Topic: first_topic      Partition: 1    Leader: 1       Replicas: 1     Isr: 1
#         Topic: first_topic      Partition: 2    Leader: 1       Replicas: 1     Isr: 1

## Delete topic
# -------------
kafka-topics --bootstrap-server localhost:29092 --delete --topic first_topic


## Produce message
# -------------
kafka-console-producer --bootstrap-server localhost:29092 --topic test_topic
>Hello World
>abc

# Consume new messages (only those produced after the consumer starts)
# -------------
kafka-console-consumer --bootstrap-server localhost:29092 --topic test_topic
# -----
Hello World
abc

# Consume messages from the beginning
# -------------
kafka-console-consumer --bootstrap-server localhost:29092 --topic test_topic --from-beginning
# -----
hi
Hello
Hi
How 
are
you
hi
hello
sdf
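A convenience not shown in the tutorial above: instead of typing messages by hand, you can generate a numbered batch and pipe it into the console producer (broker address and topic as in the commands above):

```shell
# Generate five numbered test messages into payload.txt
seq 1 5 | sed 's/^/message-/' > payload.txt
cat payload.txt
# Then pipe them into the producer (assumes the broker from the compose file above):
# kafka-console-producer --bootstrap-server localhost:29092 --topic test_topic < payload.txt
```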
				
			

HDFS docker-compose

https://faun.pub/run-your-first-big-data-project-using-hadoop-and-docker-in-less-than-10-minutes-e1bbe2974ef3

				
					# kubernetes.txt
https://www.youtube.com/watch?v=o6bxo0Oeg6o&t=130s
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/
install-kubeadm/

Installing a container runtime
Install Docker Engine on Ubuntu
=============
1.Set up Docker's apt repository.

# Add Docker's official GPG key:
sudo apt-get update
sudo apt-get install ca-certificates curl gnupg
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg

# Add the repository to Apt sources:
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  sudo tee /etc/apt/sources.list.d/docker.list > /dev/null
sudo apt-get update

2. Install the Docker packages.

sudo apt-get install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin

3. Verify that the Docker Engine installation is successful by running the hello-world image.

sudo docker run hello-world

4. Install cri-dockerd (the CRI adapter for Docker)
---------
4.1 Install Go
4.1.1 Download tarball 
wget https://go.dev/dl/go1.21.3.linux-amd64.tar.gz

4.1.2 untar
tar -C /usr/local -xzf go1.21.3.linux-amd64.tar.gz

4.1.3 export go path
echo 'export PATH=$PATH:/usr/local/go/bin' >>~/.profile
source ~/.profile 

4.1.4 Build and install cri-dockerd, on a Linux system that uses systemd and already has Docker Engine installed

# Clone cri-dockerd
git clone https://github.com/Mirantis/cri-dockerd.git

# with non-sudo
make cri-dockerd

# Run these commands as root

cd cri-dockerd
mkdir -p /usr/local/bin
install -o root -g root -m 0755 cri-dockerd /usr/local/bin/cri-dockerd
install packaging/systemd/* /etc/systemd/system
sed -i -e 's,/usr/bin/cri-dockerd,/usr/local/bin/cri-dockerd,' /etc/systemd/system/cri-docker.service
systemctl daemon-reload
systemctl enable --now cri-docker.socket



Installing kubeadm, kubelet and kubectl
=============

1. Update the apt package index and install packages needed to use the Kubernetes apt repository:

sudo apt-get update
# apt-transport-https may be a dummy package; if so, you can skip that package
sudo apt-get install -y apt-transport-https ca-certificates curl gpg

2. Download the public signing key for the Kubernetes package repositories. 

curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg


3. Add the appropriate Kubernetes apt repository
# This overwrites any existing configuration in /etc/apt/sources.list.d/kubernetes.list
echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

4. Update the apt package index, install kubelet, kubeadm and kubectl, and pin their version:

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl

Creating a cluster with kubeadm
===============

sudo kubeadm init --pod-network-cidr=10.244.0.0/16 --cri-socket=unix:///var/run/cri-dockerd.sock


Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

  export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

sudo kubeadm join 10.131.187.176:6443 --token knefgg.pdafluamsim49olo --cri-socket=unix:///var/run/cri-dockerd.sock \
        --discovery-token-ca-cert-hash sha256:b058fc69cbec62d085bb38d84f0a89879cbe16068567f061a8fac84f87eab9aa
-----

kubectl get pods -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-5dd5756b68-86d94               0/1     Pending   0          5m52s
kube-system   coredns-5dd5756b68-tq4v5               0/1     Pending   0          5m52s
kube-system   etcd-k8smaster-vm                      1/1     Running   0          6m5s
kube-system   kube-apiserver-k8smaster-vm            1/1     Running   0          6m8s
kube-system   kube-controller-manager-k8smaster-vm   1/1     Running   0          6m5s
kube-system   kube-proxy-kl7d8                       1/1     Running   0          5m52s
kube-system   kube-scheduler-k8smaster-vm            1/1     Running   0          6m5s

# Install flannel
https://github.com/flannel-io/flannel#deploying-flannel-manually
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

# Check pods after installing flannel
 kubectl get pods -A --watch
NAMESPACE      NAME                                   READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-7m5rq                  1/1     Running   0          54s
kube-system    coredns-5dd5756b68-86d94               1/1     Running   0          7m21s
kube-system    coredns-5dd5756b68-tq4v5               1/1     Running   0          7m21s
kube-system    etcd-k8smaster-vm                      1/1     Running   0          7m34s
kube-system    kube-apiserver-k8smaster-vm            1/1     Running   0          7m37s
kube-system    kube-controller-manager-k8smaster-vm   1/1     Running   0          7m34s
kube-system    kube-proxy-kl7d8                       1/1     Running   0          7m21s
kube-system    kube-scheduler-k8smaster-vm            1/1     Running   0          7m34s

				
			
				
					# Copy file to the hadoop container
docker cp kubernetes.txt namenode:/tmp/ 

# Get inside the hadoop container
docker exec -it namenode /bin/bash

# 1.Create the root directory for this project: 
hadoop fs -mkdir /tmp

# 2.Create the directory for the input files: 
hadoop fs -mkdir /tmp/Input

# 3.Copy the input files to the HDFS: 
hadoop fs -put /tmp/kubernetes.txt /tmp/Input

# You can open the web UI for HDFS at:
http://localhost:9870
				
			
				
					# spark-shell
scala> val text = sc.textFile("hdfs://localhost:9000/tmp/Input/kubernetes.txt")
text: org.apache.spark.rdd.RDD[String] = hdfs://localhost:9000/tmp/Input/kubernetes.txt MapPartitionsRDD[3] at textFile at <console>:23

scala> text.collect;
res1: Array[String] = Array(https://www.youtube.com/watch?v=o6bxo0Oeg6o&t=130s, https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/, install-kubeadm/, "", Installing a container runtime, Install Docker Engine on Ubuntu, =============, 1.Set up Docker's apt repository., "", # Add Docker's official GPG key:, sudo apt-get update, sudo apt-get install ca-certificates curl gnupg, sudo install -m 0755 -d /etc/apt/keyrings, curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg, sudo chmod a+r /etc/apt/keyrings/docker.gpg, "", # Add the repository to Apt sources:, echo \, "  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu ...

scala> val counts = text.flatMap(line => line.split(" "))
counts: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[4] at flatMap at <console>:23

scala> counts.collect;
res2: Array[String] = Array(https://www.youtube.com/watch?v=o6bxo0Oeg6o&t=130s, https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/, install-kubeadm/, "", Installing, a, container, runtime, Install, Docker, Engine, on, Ubuntu, =============, 1.Set, up, Docker's, apt, repository., "", #, Add, Docker's, official, GPG, key:, sudo, apt-get, update, sudo, apt-get, install, ca-certificates, curl, gnupg, sudo, install, -m, 0755, -d, /etc/apt/keyrings, curl, -fsSL, https://download.docker.com/linux/ubuntu/gpg, |, sudo, gpg, --dearmor, -o, /etc/apt/keyrings/docker.gpg, sudo, chmod, a+r, /etc/apt/keyrings/docker.gpg, "", #, Add, the, repository, to, Apt, sources:, echo, \, "", "", "deb, [arch="$(dpkg, --print-architecture)", signed-by=/etc/apt/keyrings...

scala> val mapf = counts.map(word => (word,1))
mapf: org.apache.spark.rdd.RDD[(String, Int)] = MapPartitionsRDD[5] at map at <console>:23

scala> mapf.collect
res3: Array[(String, Int)] = Array((https://www.youtube.com/watch?v=o6bxo0Oeg6o&t=130s,1), (https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/,1), (install-kubeadm/,1), ("",1), (Installing,1), (a,1), (container,1), (runtime,1), (Install,1), (Docker,1), (Engine,1), (on,1), (Ubuntu,1), (=============,1), (1.Set,1), (up,1), (Docker's,1), (apt,1), (repository.,1), ("",1), (#,1), (Add,1), (Docker's,1), (official,1), (GPG,1), (key:,1), (sudo,1), (apt-get,1), (update,1), (sudo,1), (apt-get,1), (install,1), (ca-certificates,1), (curl,1), (gnupg,1), (sudo,1), (install,1), (-m,1), (0755,1), (-d,1), (/etc/apt/keyrings,1), (curl,1), (-fsSL,1), (https://download.docker.com/linux/ubuntu/gpg,1), (|,1), (sudo,1), (gpg,1), (--dearmor,1), (-o,1), (/etc/apt/ke...

scala> val reducef = mapf.reduceByKey(_+_);
reducef: org.apache.spark.rdd.RDD[(String, Int)] = ShuffledRDD[6] at reduceByKey at <console>:23

scala> reducef.collect
res4: Array[(String, Int)] = Array((package,4), (index,1), (cluster.,1), (kube-scheduler-k8smaster-vm,2), ("$(.,1), (-e,1), (/',1), (/etc/kubernetes/admin.conf,1), (/etc/os-release,1), (This,1), (repository.,1), ([signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg],1), (RESTARTS,2), (kube-flannel,1), (kube-apiserver-k8smaster-vm,2), (daemon-reload,1), (export,2), (gpg,3), (already,1), (any,2), (go,1), (make,1), (network,1), (Download,2), (git,1), (control-plane,1), (4.,2), (packaging/systemd/*,1), (-o,3), (are,1), ("kubectl,1), (2.,2), (sha256:b058fc69cbec62d085bb38d84f0a89879cbe16068567f061a8fac84f87eab9aa,1), ([podnetwork].yaml",1), (https://download.docker.com/linux/ubuntu/gpg,1), (STATUS,2), (kubelet,3), (overwrites,1), (commands,1), (can,3), (tee,2), (...


				
			

Docker MongoDB – csv import

Ref: https://www.mongodb.com/developer/products/mongodb/mongoimport-guide/
				
# Docker compose for mongodb.
# Place the csv file in the mounted volume (./mongodata) so it is visible inside the container at /data/db

version: "2.0"
services:
  mongodb:
    image: mongo:4.4.2
    restart: always
    mem_limit: 512m
    volumes:
      - ./mongodata:/data/db
    ports:
      - "27019:27017"
    command: mongod
    # environment:
    #   - MONGO_INITDB_ROOT_USERNAME=root
    #   - MONGO_INITDB_ROOT_PASSWORD=password
    healthcheck:
      test: "mongo --eval 'db.stats().ok'"
      interval: 5s
      timeout: 2s
      retries: 60
				
			
				
					# Connect to mongodb docker container
docker exec -it 79e80b878fbc bash

root@ > mongoimport \
   --collection='fields_option' \
   --file=/data/db/events.csv \
   --type=csv \
   --fields="timestamp","visitorid","event","itemid","transactionid"
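Since --fields is passed explicitly, events.csv is assumed to have no header row (with a header you would use --headerline instead). A quick pre-import sanity check, with invented sample rows, is to confirm every row has as many columns as the field list:

```shell
# Every row should have 5 comma-separated columns to match the --fields list
printf '1433221332117,257597,view,355908,\n1433224214164,992329,view,248676,\n' > events_sample.csv
awk -F',' 'NF != 5 { bad++ } END { print (bad ? bad : 0) " malformed rows" }' events_sample.csv
```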
				
			
wp-mysql-nginx

WordPress + MySQL + Nginx

WordPress using docker on Nginx without SSL

https://medium.com/swlh/wordpress-deployment-with-nginx-php-fpm-and-mariadb-using-docker-compose-55f59e5c1a

 

				
					// uploads.ini
file_uploads = On
memory_limit = 512M
upload_max_filesize = 256M
post_max_size = 256M
max_execution_time = 300
max_input_time = 1000
				
			
				
					// docker-compose.yml
version: '3'
services:
  mysql:
    image: mariadb
    volumes:
      - /data/mysql:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: mysql_root_pass
      MYSQL_DATABASE: db_name
      MYSQL_USER: user_name
      MYSQL_PASSWORD: user_pass
    restart: always
  wordpress:
    image: wordpress:php7.3-fpm-alpine
    volumes:
      - ./wordpress:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on:
      - mysql
    environment:
      WORDPRESS_DB_HOST: mysql
      MYSQL_ROOT_PASSWORD: mysql_root_pass
      WORDPRESS_DB_NAME: db_name
      WORDPRESS_DB_USER: user_name
      WORDPRESS_DB_PASSWORD: user_pass
      WORDPRESS_TABLE_PREFIX: wp_
    links:
      - mysql
    restart: always
  nginx:
    image: nginx:alpine
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./wordpress:/var/www/html
    ports:
      - 80:80
    links:
      - wordpress
				
			
				
					// nginx.conf
server {
  listen 80;
  listen [::]:80;
  access_log off;

  root /var/www/html;
  index index.php;
  server_name example.com;
  server_tokens off;

  location / {
    # First attempt to serve request as file, then
    # as directory, then fall back to displaying a 404.
    try_files $uri $uri/ /index.php?$args;
  }
  
  # pass the PHP scripts to FastCGI server listening on wordpress:9000
  location ~ \.php$ {
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass wordpress:9000;
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param SCRIPT_NAME $fastcgi_script_name;
  }
}

				
			
				
					CONTAINER ID   IMAGE                         COMMAND                  CREATED          STATUS          PORTS                               NAMES
0a9f6944090a   nginx:alpine                  "/docker-entrypoint.…"   49 minutes ago   Up 49 minutes   0.0.0.0:80->80/tcp, :::80->80/tcp   wordpress_nginx-nginx-1
0a8ae0a6bcc6   wordpress:php7.3-fpm-alpine   "docker-entrypoint.s…"   51 minutes ago   Up 49 minutes   9000/tcp                            wordpress_nginx-wordpress-1
d264ceb8fb32   mariadb                       "docker-entrypoint.s…"   51 minutes ago   Up 49 minutes   3306/tcp                            wordpress_nginx-mysql-1

				
			
Docker-Wordpress-Nginx

WordPress + Docker + Nginx + SSL

Start WordPress on Nginx in Docker containers

 

 

				
For an easy way to get an SSL certificate, you can use ZeroSSL to create a new certificate.
It is valid for 90 days for free.
				
			
				
					// Create the SSL certificate 
// https://mpolinowski.github.io/docs/DevOps/NGINX/2020-08-27--nginx-docker-ssl-certs-self-signed/2020-08-27/#creating-the-ssl-certificate

sudo openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout /opt/docker-ingress/configuration/ssl/nginx-selfsigned.key -out /opt/docker-ingress/configuration/ssl/nginx-selfsigned.crt
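The nginx.conf below includes /etc/nginx/ssl/ssl-params.conf, which this note never shows. A minimal sketch (the file name comes from the include directive; the values are assumptions, tune them for your setup):

```
# ssl-params.conf (sketch; values are assumptions)
ssl_protocols TLSv1.2 TLSv1.3;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:10m;
ssl_session_timeout 10m;
```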
				
			
				
					// uploads.ini
file_uploads = On
memory_limit = 512M
upload_max_filesize = 256M
post_max_size = 256M
max_execution_time = 300
max_input_time = 1000

				
			
				
					// .env
MYSQL_ROOT_PASSWORD=wordpress_root
MYSQL_USER=wordpress
MYSQL_PASSWORD=wordpress

				
			
				
					// docker-compose.yml
version: '3'

services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes: 
      - mysql_data:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network

  wordpress:
    depends_on: 
      - db
    image: wordpress:6.2.0-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - ./wordpress_data:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    networks:
      - app-network

  webserver:
    depends_on:
      - wordpress
    image: nginx:stable-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./configuration/conf.d:/etc/nginx/conf.d
      - ./configuration/ssl:/etc/nginx/ssl
      - ./configuration/conf.d/nginx.conf:/etc/nginx/conf.d/nginx.conf
      - ./wordpress_data:/var/www/html
    networks:
      - app-network

volumes:
  mysql_data:

networks:
  app-network: {}
  #    driver: bridge

				
			
				
					// nginx.conf
server {
    listen      443 ssl;
    listen      [::]:443 ssl;

    ssl_certificate /etc/nginx/ssl/certificate.crt;
    ssl_certificate_key /etc/nginx/ssl/private.key;
    include     /etc/nginx/ssl/ssl-params.conf;

    server_name ibliv.com www.ibliv.com;

    index index.php index.html index.htm;

    root /var/www/html;

    location ~ /.well-known/acme-challenge {
        allow all;
        root /var/www/html;
    }

    location / {
        try_files $uri $uri/ /index.php$is_args$args;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }

    location ~ /\.ht {
        deny all;
    }
    
    location = /favicon.ico { 
        log_not_found off; access_log off; 
    }
    location = /robots.txt { 
        log_not_found off; access_log off; allow all; 
    }
    location ~* \.(css|gif|ico|jpeg|jpg|js|png)$ {
        expires max;
        log_not_found off;
    }
}
server {
    listen 80;
    listen [::]:80;

    server_name ibliv.com www.ibliv.com;

    return 301 https://$server_name$request_uri;
}

				
			
				
# Start the containers
docker compose up -d

# List the containers
docker ps
CONTAINER ID   IMAGE                        COMMAND                  CREATED          STATUS          PORTS                               NAMES
82bfa6789358   nginx:latest                 "/docker-entrypoint.…"   9 minutes ago    Up 9 minutes    0.0.0.0:80->80/tcp, :::80->80/tcp   webserver
4bcf3e01fb98   wordpress:6.2.0-fpm-alpine   "docker-entrypoint.s…"   9 minutes ago    Up 9 minutes    9000/tcp                            wordpress
3cd895c69c4f   mysql:8.0                    "docker-entrypoint.s…"   47 minutes ago   Up 47 minutes   3306/tcp, 33060/tcp                 db

				
			

Docker Commands

MySQL docker

				
					> docker run --name docker-mysql -v /var/lib/mysql-lively-data:/var/lib/mysql -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=oplink456 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p:3306:3306 --net livelyhealth --restart always mysql:8 --default-authentication-plugin=mysql_native_password

> docker run --name docker-mysql -v /var/lib/mysql-lively-data:/var/lib/mysql -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p:3306:3306 --restart always mysql:8 --default-authentication-plugin=mysql_native_password

> docker run --name docker-durgesh -v /var/lib/mysql-durgesh-data:/var/lib/mysql -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=secret -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p:3307:3306 --restart always mysql:8 --default-authentication-plugin=mysql_native_password
				
			

Mongo docker

				
					> docker run --name mongodb0 -v /data/mongodb0:/data/db -p 27017:27017 -e MONGO_INITDB_ROOT_USERNAME=root --privileged=true -e MONGO_INITDB_ROOT_PASSWORD=root -d mongo --auth
> docker exec -it b654ee788e95 bash
> mongodb://root:root@127.0.0.1:27017/?authSource=admin
				
			

Postgres docker

				
					> docker run -d -p 5432:5432  \
      --restart always \
      --name some-postgres \
      -e POSTGRES_PASSWORD=mysecretpassword \
      -e PGDATA=/var/lib/postgresql/data/pgdata \
      -v ~/AWSMO/postgres-data:/var/lib/postgresql/data \
      postgres
				
			
				
					> docker run -d -p 5432:5432  \
      --restart always \
      --name some-postgres \
      -e POSTGRES_PASSWORD=mysecretpassword \
      -e PGDATA=/var/lib/postgresql/data/pgdata \
      -v /home/soundarya/temp/db-data/postgres/data1:/var/lib/postgresql/data \
      postgres
				
			

Remove all dangling images (tag &lt;none&gt;)

				
					docker rmi $(docker images --filter "dangling=true" -q --no-trunc)
				
			

Remove exited (or all stopped) Docker containers

				
# remove only exited containers
docker rm $(docker ps -a -f status=exited -q)
# remove ALL containers that are not running
docker rm $(docker ps -a -q)
				
			

Find images using a wildcard search

				
					docker images --filter=reference='*hands*/*:*latest*'

REPOSITORY                           TAG       IMAGE ID       CREATED          SIZE
hands-on/product-composite-service   latest    319db4ad7810   15 minutes ago   337MB
hands-on/gateway                     latest    7dc5a3abb416   15 minutes ago   328MB
hands-on/config-server               latest    db7c7bceb7bc   15 minutes ago   309MB
				
			

Copy Docker images to another server using a wildcard filter

				
					docker images --filter=reference='*hands*/*:*latest*' | awk '{print $1 " " $2 " " $3 }' | while read REPOSITORY TAG IMAGE_ID
do
  echo "== Save image: $REPOSITORY"
  docker save "$REPOSITORY" | ssh -i ~/.ssh/multipass-k8s -C ubuntu@10.131.187.176 docker load
done
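Note that `docker save "$REPOSITORY"` without a :TAG exports every tag of that repository. The awk + while read plumbing can be checked in isolation against canned `docker images` lines (sample values taken from the listing above):

```shell
# Feed fake `docker images` output through the same parsing pipeline
printf 'hands-on/gateway latest 7dc5a3abb416\nhands-on/config-server latest db7c7bceb7bc\n' |
  awk '{print $1 " " $2 " " $3}' |
  while read REPOSITORY TAG IMAGE_ID
  do
    echo "== Save image: $REPOSITORY:$TAG"
  done
```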
				
			
				
					## RabbitMQ docker

# --------------------

docker run --rm -it -d --hostname my-rabbit -p 15672:15672 -p 5672:5672 rabbitmq:3-management

http://localhost:15672/

credentials guest:guest

https://codeburst.io/get-started-with-rabbitmq-on-docker-4428d7f6e46b







## Connect to MySQL in a terminal

docker exec -it bcf53fb14786 bash

mysql --protocol=tcp -u root -p -P 3306

mysql -uroot -p -h 127.0.0.1

mysql -u root -p

 

# MySQL 8 removed IDENTIFIED BY inside GRANT; create the user first:
CREATE USER 'user_name'@'%' IDENTIFIED BY 'admin';
GRANT ALL PRIVILEGES ON *.* TO 'user_name'@'%';

FLUSH PRIVILEGES;

 

## Run local docker containers

# -----------------------------------------------------------

#   Connect MySQL and Spring with separate docker containers

# -----------------------------------------------------------

 

# 1) build docker for backend

docker build -t longevity-backend .

 

# 2) create network to be used

docker network create livelyhealth

 

# 3) start mysql with network name

docker run --name docker-mysql -v /var/lib/mysql-lively-data:/var/lib/mysql -e MYSQL_ROOT_HOST=% -e MYSQL_ROOT_PASSWORD=oplink456 -e MYSQL_ALLOW_EMPTY_PASSWORD=yes -d -p:3306:3306 --net livelyhealth --restart always mysql:8 --default-authentication-plugin=mysql_native_password

 

# 4) start backend with network name

docker run -d -p 8080:8080 -t --net livelyhealth longevity-backend

 

# -----------------------------------------------------------

# E N D

# -----------------------------------------------------------







# -------- application.properties --------------

spring.datasource.url=jdbc:mysql://docker-mysql:3306/longevity

# -----------------------------------------------------------

docker network ls

docker network inspect livelyhealth

 

# once mysql docker starts, change the user/pwd

 

mysql -u homestead -p -h 127.0.0.1

 

ALTER USER 'root'@'%' IDENTIFIED WITH mysql_native_password BY 'oplink456';

ALTER USER 'homestead'@'%' IDENTIFIED WITH mysql_native_password BY 'secret';

FLUSH PRIVILEGES;

 

# use in compass

mongodb://admin:password@localhost:27017/tutorialmern?readPreference=primary&appname=MongoDB%20Compass&ssl=false

 

# use in code

mongodb://admin:password@localhost:27017/tutorialmern?authSource=admin







## If a file-watcher error (ENOSPC: system limit for number of file watchers reached) occurs while running the client

# ==============================

https://stackoverflow.com/questions/55763428/react-native-error-enospc-system-limit-for-number-of-file-watchers-reached

 

# insert the new value into the system config

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

 

# check that the new value was applied

cat /proc/sys/fs/inotify/max_user_watches

 

# config variable name (not runnable)

fs.inotify.max_user_watches=524288

# ==============================

 

# kill process of any port

kill $(lsof -t -i:3001)

sudo lsof -i -P -n | grep 3306

kill -9 $(lsof -t -i:3306)

 

# get folder size

du -h --max-depth=1







mongodb+srv://soundarya:<password>@cluster0.ny9zr.mongodb.net/myFirstDatabase?retryWrites=true&w=majority

 

DATABASE='mongodb+srv://soundarya:droisys@cluster0.ny9zr.mongodb.net/mernstack?retryWrites=true&w=majority';

DATABASE="mongodb://admin:password@localhost:27017/mernstack?authSource=admin"

XuH4tfjE6ZjZXC_xoG48wGaQlU1vcLGWk







# Shortcuts

ln -s /media/soundarya/h_drive/hdrive_data/ ~/hdrive_data

ln -s ~/hdrive_data/MERN/projects/mern_thapa_local/docker_info.md ~/Desktop/desktop_info.md

ln -s ~/hdrive_data/spring ~/spring

ln -s ~/hdrive_data/spring-durgesh ~/spring-learn

ln -s ~/hdrive_data/MERN ~/mern

ln -s ~/hdrive_data/Freedom_financial ~/ff

ln -s ~/hdrive_data/NodeJS ~/NodeJS

 

#encfs

encfs /mnt/h_drive/hdrive_data/temp/J1A /mnt/h_drive/hdrive_data/temp/J2 

fusermount -u /mnt/h_drive/hdrive_data/temp/J2 

sudo umount -l /mnt/h_drive/hdrive_data/temp/J2

 

#compress file

tar cfz bin.tgz bin

 

#uncompress tar.gz file

tar -xvzf file.tar.gz

 

#search

grep -rnw '/path/to/somewhere/' -e 'pattern'
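A tiny self-contained demo of those flags (-r recurse, -n line numbers, -w whole-word match); the /tmp/grepdemo path is just an example:

```shell
mkdir -p /tmp/grepdemo
printf 'pattern here\npatterns here\n' > /tmp/grepdemo/a.txt
# -w matches whole words only, so the "patterns" line is not reported
grep -rnw '/tmp/grepdemo' -e 'pattern'
```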

 

#Timestream databases

https://www.g2.com/categories/time-series-databases

 

#InfluxDB

https://hub.docker.com/_/influxdb

 

$ docker run -p 8086:8086 \
      -v $PWD/data:/var/lib/influxdb2 \
      -v $PWD/config:/etc/influxdb2 \
      -v $PWD/scripts:/docker-entrypoint-initdb.d \
      -e DOCKER_INFLUXDB_INIT_MODE=setup \
      -e DOCKER_INFLUXDB_INIT_USERNAME=my-user \
      -e DOCKER_INFLUXDB_INIT_PASSWORD=my-password \
      -e DOCKER_INFLUXDB_INIT_ORG=my-org \
      -e DOCKER_INFLUXDB_INIT_BUCKET=my-bucket \
      -e V1_DB_NAME=v1-db \
      -e V1_RP_NAME=v1-rp \
      -e V1_AUTH_USERNAME=v1-user \
      -e V1_AUTH_PASSWORD=v1-password \
      influxdb:2.0

$ docker run -d -p 8086:8086 \
      --restart always \
      -v ~/AWSMO/InfluxDB-data/data:/var/lib/influxdb2 \
      -v ~/AWSMO/InfluxDB-data/config:/etc/influxdb2 \
      -v ~/AWSMO/InfluxDB-data/scripts:/docker-entrypoint-initdb.d \
      -e DOCKER_INFLUXDB_INIT_MODE=setup \
      -e DOCKER_INFLUXDB_INIT_USERNAME=my-user \
      -e DOCKER_INFLUXDB_INIT_PASSWORD=my-password \
      -e DOCKER_INFLUXDB_INIT_ORG=my-org \
      -e DOCKER_INFLUXDB_INIT_BUCKET=my-bucket \
      -e V1_DB_NAME=v1-db \
      -e V1_RP_NAME=v1-rp \
      -e V1_AUTH_USERNAME=v1-user \
      -e V1_AUTH_PASSWORD=v1-password \
      influxdb:2.1.1







influx config create --config-name influx-config \
  --host-url http://localhost:8086 \
  --org awsmo \
  --token PJ7zd8rw0ai7rOMnTUil_kIz2wlcOzVH9_qD4p_LYxehQS6Uyp6mJ3X9QQUjNAEIigvsMSdyKlvrd5lOLtANRA== \
  --active







3.216.123.152

postgres/awsmo@123

db: awsmodb







====

https://www.tutorialspoint.com/spring_boot/spring_boot_database_handling.htm

Restart Postgres server

----------------------------------

Postgres live

3.216.123.152

 

> cd /usr/pgsql-14/bin

> sudo su postgres

> bash-4.2$ ./pg_ctl restart -D /var/lib/pgsql/14/data

 

Find number of open connections

----------------------------------

> ps ax | grep [p]ost | wc -l    # [p]ost keeps the grep process itself out of the count

 

influx -username admin -password 'awsmo@12345'

 

influx config \
  --host-url http://localhost:8086 \
  --org my-org \
  --token z5XiyReADjf0W915EzfZVGlkGXwhdZjJjOCGVeFjr7iZZ8HTVXK15Z7zA5oySQdn303k6a-uozf2rP67GAo60g== \
  --active







influx query 'from(bucket:"my-bucket") |> range(start:-30d)'

 

https://techviewleo.com/how-to-install-influxdb-on-amazon-linux/

CREATE USER admin WITH PASSWORD 'awsmo@12345' WITH ALL PRIVILEGES

GRANT ALL PRIVILEGES TO admin

 

curl -G "http://localhost:8086/query?u=admin&p=AWSMO@Password" --data-urlencode "q=SHOW DATABASES"







#QuestDB

https://questdb.io/docs/get-started/docker/

 

docker run -p 9000:9000 \
 -p 9009:9009 \
 -p 8812:8812 \
 -p 9003:9003 \
 questdb/questdb







#open elementary code

io.elementary.code







Influxdb history:

================

Date: 22 Feb 2022

Exported Data: 14000 records, excel size: 3.8 mb

Influxdb Full Backup size: 1.3G

 
				
			

Git commands

				
					# create git repo

git init --bare ~/mern/project.git

 

# go to project, initialize the git

git init .

 

# add git remote to local repo

git remote add origin ~/mern/project.git

 

# complete the remaining git commands

git add .

git commit -m "msg"

git push

 

# connect git on local network

git remote add origin ssh://soundarya@192.168.1.6/home/soundarya/spring-durgesh/Springboot-LSF.git

git branch --set-upstream-to=origin/master
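The whole local-remote flow above, end to end, with throwaway paths (/tmp/demo.git and /tmp/work are placeholders for ~/mern/project.git and your project directory):

```shell
git init --bare /tmp/demo.git        # the "server" repo
mkdir -p /tmp/work
cd /tmp/work
git init .
git remote add origin /tmp/demo.git
echo hello > README
git add .
git -c user.email=demo@example.com -c user.name=demo commit -m "msg"
git push origin HEAD                 # HEAD works whether the branch is master or main
```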