Docker Series Part 1: Simple Introduction

Containers existed long before Docker came into the picture. Let’s first look at LXC (Linux Containers).

LXC lets Linux users easily create and manage system or application containers. The goal of LXC is to create an environment as close as possible to a standard Linux installation, but without the need for a separate kernel.

This is perfect for isolating applications and containerizing them: processes in one container can’t access resources in another. Let’s see what the Docker team says,

Docker containers wrap a piece of software in a complete filesystem that contains everything needed to run: code, runtime, system tools, system libraries – anything that can be installed on a server.

In much simpler terms, they provide a virtual machine-like isolated environment, with a few other benefits. But while Docker is more of an application container, LXC is more of an OS container. I suggest this article for an introduction to LXC.

Let’s get started with Docker. First, install Docker Engine by following the instructions here.

Let’s run our first container.

docker run -it ubuntu

This will put you in an Ubuntu container. You will have the same experience whether you are on Linux, macOS or Windows. Keep in mind that Docker containers are not VMs. Containers are ephemeral.

If you install any software or create/modify any files, those changes will not persist on their own. To keep them, you have to commit the current state of the container to a new Docker image.

➜  ~ docker ps -a
CONTAINER ID        IMAGE                 COMMAND                  CREATED             STATUS                   PORTS               NAMES
89d5116abd2e        alpine                "sh"                     5 seconds ago       Exited (0) 2 seconds ago                       vigilant_kilby
➜  ~ docker commit 89d5116abd2e mynewimage

docker commit takes the old container’s ID and the name of the new image as arguments.
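Once committed, you can start a fresh container from the new image and your changes will be in it. A quick sketch, using the image name committed above:

docker run -it mynewimage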

I guess this is enough to get started. You can check more pre-built public images at the Docker Hub or the Docker Store.

Git push to multiple remotes.

Ah, git. Always surprising.

I recently had the requirement to push to multiple remote URLs at once. I found out that you can set multiple push URLs. The trick lies in using

git remote set-url --add --push

So what I did was create a new remote, all, which seemed appropriate, using
git remote add all git@gitlab.com:mbtamuli/miterp.git

To add the two URLs, I had to do,
git remote set-url --add --push all git@gitlab.com:mbtamuli/miterp.git
git remote set-url --add --push all ssh://mbtamuli@techfreak.ga:2212/~/miterp.git

Somehow, just adding (git remote set-url --add --push) a single URL after creating the remote (git remote add) seems to replace the push URL instead of adding to it. So we have to add twice, once for each of the two URLs.
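You can verify the result with git remote -v: the all remote should list one fetch URL and two push URLs. After that, a single push goes to both (the branch name master here is just an example):

git remote -v
git push all master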

Docker Series

This will be a small introduction series to the world of Docker.
I’ll be going over the following topics:

  • Simple usage of Docker containers.
  • Data volumes and mounting a host directory in a container.
  • Building your own Docker image.
  • Running multiple containers simultaneously (Docker Compose).
  • Docker cluster management and orchestration features (Docker Swarm).

    Disclaimer: Most of what I will be going through is for beginners and will contain material from the official Docker documentation.

My adventures with AWS

I had applied for the GitHub Student Developer Pack. I had $150 in credits for AWS. 😎

I decided to put this to good use. I have started exploring Amazon EC2, Amazon Elastic Beanstalk, and Amazon EC2 Container Service. The aws-cli has been pretty awesome in the little use I have made of it so far. As I go, I will keep updating the repo aws-scripts with the scripts I prepare. As this is just me testing with aws-cli scripts, they are hard-coded and in general may not be usable anywhere else without a little tweaking.
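To give a flavour of the aws-cli, here is the kind of one-liner I have been playing with. This is only a sketch and assumes your credentials and default region are already set up with aws configure:

aws ec2 describe-instances --query 'Reservations[].Instances[].InstanceId' --output text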

I will keep blogging about my adventure, all my successes and failures as well.

Docker Registry on local network.

So, we have all used Docker Hub and its private repositories, but sometimes, if we want to do some testing on the local network and save some bandwidth, we can set up our own Docker Registry. You can go read the full documentation if you want; this is just to get you running.

The machine on which you want to configure the registry (let’s call it the server) must have a static IP. Add an entry for it to your /etc/hosts file (Linux or Mac). Then follow these steps (run these on the server only).
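The hosts entry just maps a name to the server’s static IP. For example (the IP here is made up; use your server’s real address, and the name must match the one used later, docker.local in this post):

192.168.1.10    docker.local

With that in place, generate a self-signed certificate: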

mkdir -p certs && openssl req \
  -newkey rsa:4096 -nodes -sha256 -keyout certs/docker.key \
  -x509 -days 365 -out certs/docker.crt

When openssl prompts for the Common Name, enter the registry host name (docker.local in this setup).

Then, run this command. (Obviously you have docker installed and configured for your user! You’re not that dumb. 😉 )

docker run -d -p 5000:5000 --restart=always --name registry \
-v `pwd`/data:/var/lib/registry \
-v `pwd`/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/docker.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/docker.key \
registry:2

You are almost done! Just two more steps left to configure your docker engine to talk to this registry. (These must be done on your local machines)

  1. Copy the docker.crt file to /etc/docker/certs.d/docker.local:5000/ca.crt
  2. Restart Docker Engine. (systemctl restart docker)
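That’s it. To check that everything works, tag any local image with the registry host and push it. A rough sketch, using alpine as the example image:

docker pull alpine
docker tag alpine docker.local:5000/alpine
docker push docker.local:5000/alpine
docker pull docker.local:5000/alpine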

Sources

  • https://docs.docker.com/registry/insecure/#/using-self-signed-certificates
  • https://docs.docker.com/registry/deploying/#get-a-certificate
  • https://docs.docker.com/registry/deploying/#storage

Happy New Year

With all the not-so-great things happening in 2016, some people made a movie. Let’s all choose to stop the mindless actions on social networking sites (that like and share on the Syrian kid’s picture won’t do the kid any good) and do some meaningful actions instead. Let’s make the earth a livable place (reducing pollution, corruption).

We can stop depending on anyone else to help us and rescue ourselves from our problems. Be your own hero.

Carpe diem and Happy New Year.

🙂

 

My Download Box

What is this?

This is not supposed to be a tutorial, and basic knowledge of Linux is assumed.

If there is a movie I want to watch when I get home, I have to wait till I am home, download the movie and then watch it. With the kind of internet I have, the download may take around 1–2 hours. I am an impatient person and can’t wait that long. So I took some bits and pieces that I knew and set up my personal download box, which runs as long as internet and electricity are provided to it.
(This is not a dedicated server for a live site, so I don’t have a guaranteed uptime of 99.99% 😀 ).

How did I do it?

I had a Raspberry Pi 2 lying around that I wanted to put to good use. I got an Ethernet cable, connected it to my router and set it up. I was able to SSH into it (a good start). I installed aria2 on it.
For downloading a single file or a few files (locally, from home) this is perfect.

aria2c -x10 "https://www.example.com/movie.mp4"
aria2c -x10 -i list_of_movies.txt

with list_of_movies.txt containing multiple URLs, each on its own line. Cool till now? This ain’t no rocket science. Then I came to the point where I wanted to be able to do this remotely: maybe I’m out with my friends somewhere and I decide I want to watch this movie X. I want to get a download link for X and put it on my download box so it is downloaded by the time I reach home. These commands are useless if I’m not able to SSH into my rPi and run them.

So I read up more on a topic I had known about: reverse SSH tunneling. You can get the idea from here – How does reverse SSH tunneling work? I have a droplet running on DigitalOcean. I set up key-based authentication for SSH and created a reverse SSH tunnel using

ssh -f -N -T -R 6666:localhost:22 username@yourpublichost.example.com
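With the tunnel up, getting back to the Pi is just an SSH to the forwarded port from the droplet. A minimal sketch (the user name pi is an assumption; use whatever user exists on your rPi):

ssh -p 6666 pi@localhost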

Now the problem was that, although this reverse tunnel worked perfectly, it would close down on power loss or loss of connectivity. I could put the command in cron or write a script which ran it periodically. Instead I found something that does a pretty awesome job of maintaining a persistent reverse SSH tunnel.

Meet AutoSSH (The site was down when I checked so here’s another link – Internet Archive: AutoSSH). This is the perfect tool for my job. So I now set up AutoSSH using

autossh -M 10994 -q -N -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3"  -R 6666:localhost:22 username@yourpublichost.example.com -p 2212 &

Now a persistent connection always exists between the rPi and my remote droplet, and I can SSH in anytime and add a download link to aria2c, which I run in daemon mode.

aria2c --enable-rpc --rpc-listen-all --rpc-secret mysecret

Then to add a download link, I do

aria2rpc addUri "https://www.example.com/movie.mp4" -x10

For aria2rpc, look at – aria2rpc. This is not perfect; maybe an Android app (which might exist) or a web interface would be nice. 🙂
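In the meantime, plain curl also works, since aria2 exposes a JSON-RPC endpoint directly. This is only a sketch, assuming the daemon above is running on its default RPC port 6800 with the secret mysecret:

curl http://localhost:6800/jsonrpc \
  -d '{"jsonrpc":"2.0","id":"1","method":"aria2.addUri","params":["token:mysecret",["https://www.example.com/movie.mp4"]]}'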

P.S. – A web interface does exist, but I haven’t tinkered with it. WebUI Aria2

File Manager, Browser and Shell

This is the post 3 of 3 in the series: My Working Environment

Introduction

As you might have guessed from the last two posts in this series, here and here, I love doing stuff the command-line way. So this post will describe my file manager, browser, shell and its extensions.

zsh

I use zsh as my shell. I love the toppings we can add with zsh. Especially Oh My Zsh. Oh My Zsh has visual appeal as well as quite a lot of functionality. This is my zsh config file – zshrc

If you also install fortune and boxes, you will see random messages like this every time you invoke the shell.

 ______________________________________________
/\                                             \
\_| Excellent time to become a missing person. |
 |                                             |
 |  ___________________________________________|_
 \_/___________________________________________/
mbtamuli@techfreak ~ »
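That banner comes from piping fortune through boxes in the shell startup file. Roughly, a line like this in your zshrc does it (the box design name is only an example; boxes -l lists the available ones):

fortune | boxes -d stone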

ranger

Ranger is the file manager I prefer using with i3. It is another command-line application, and fairly intuitive for browsing around your file system using just the arrow keys. Obviously the arrow keys are not the only keys you can use; you can also type commands like :mkdir dirname.

Google Chrome

This is something I can’t stop using no matter what is said about Chrome being RAM hungry and other such stuff. I use extensions such as OneTab, AdBlock, Pocket, Pushbullet, and last but not least, Vimium.

Media Tools

This is the post 2 of 3 in the series: My Working Environment

Introduction

Okay, this post deals with tools I use to deal with my media files, namely, image, video and audio.

These are all very lightweight applications and run very fast. I will just mention how to get started with them, as lots of documentation and easy-to-read articles already explain their use better than I can. But if you have any doubts, you can still post a comment.

Feh

Starting with feh, use feh filename to just view a file.

and to browse a directory,
feh -g 640x480 -d -S filename /path/to/directory

  • The -g flag forces the images to appear no larger than 640×480
  • The -d flag draws the file name
  • The -S filename flag sorts the images by file name

cmus

Now, with cmus, you can do a lot of the things you can do with GUI music players, like creating libraries and playlists. I will mention some basic functionality.

enter(return key) – start playback
c – pause playback
v – stop playback
b – next track
z – previous track
s – toggle shuffle
x – restart track
i – jump view to the currently playing track (handy when in shuffle mode)
- – reduce the volume by 10%
+ – increase the volume by 10%
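To get songs into the library in the first place, you add a directory from inside cmus. Something like this (the path is just an example):

:add ~/Music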

mplayer

Well, I use this more than the other two tools. I also use this to play audio files sometimes. This is my config file for mplayer – config

For controlling the player,

Left and Right – Seek backward/forward 10 seconds.
Up and Down – Seek forward/backward 1 minute.
Page Up and Page Down – Seek forward/backward 10 minutes.
< and > – Go backward/forward in the playlist.
space – pause (pressing again unpauses).
q – stop playing and quit.
f – toggle fullscreen
v – toggle subtitle visibility.
x and z – adjust subtitle delay by +/- 0.1 seconds.

That’s it for this post. I’ll talk about the browser, file manager, shell and terminal I use next.

Ubuntu and i3

This is the post 1 of 3 in the series: My Working Environment

Introduction

I am using Ubuntu 16.04 desktop edition as my operating system. I have set up i3 as my window manager, and I must say, I am in love with i3. It is very easy to use and you can configure it to your heart’s content!

There is some pretty cool stuff one can do.

To install i3 on Ubuntu, follow these steps – Install latest i3

Using i3

I find it very easy to use, and it is light on resources and thus loads faster. You’ll appreciate this if you have a slow computer. As i3 is fully keyboard based, you’ll not find a conventional menu with icons. It does have a menu, dmenu, where you can type an executable’s name and it will launch it.
You can also specify applications to launch at startup in the i3 config file. You can find mine at – i3config
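A startup entry in the i3 config is just an exec line. A small sketch (nm-applet is only an example; put whatever you want autostarted there):

exec --no-startup-id nm-applet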

Here are some default keyboard shortcuts. In these, $mod means the Windows key. It is something you can set: when you log in for the first time after installing i3, it asks which key you want as the mod key, giving the option of the Alt or Super (Windows) key.

$mod+Return – Opens up the terminal
$mod+Shift+q – Close the focused window

Move Focus

$mod+Left – Focus window on the left
$mod+Down – Focus lower window
$mod+Up – Focus upper window
$mod+Right – Focus window on the right

Move Window

$mod+Shift+Left – Move window to the left
$mod+Shift+Down – Move window down
$mod+Shift+Up – Move window up
$mod+Shift+Right – Move window to the right

You can refer to more shortcuts here in this image – i3 reference

This is all for Ubuntu and i3. More posts to follow. Stay Tuned 🙂