How to create a Media Server out of a router

Hello folks. I’m here with yet another tutorial. This time, we are going to create a media server out of a router. Sounds cool, doesn’t it? Let’s do it then.

Before proceeding, I want you to go through the prerequisites for this tutorial. First of all, your router should have OpenWrt installed; you can install it by following guides like this. Secondly, your router should have a USB port. We will use this port to connect the mass storage device (in our case, a 16GB flash drive). The router I’m using is a TP-LINK TL-WR1043ND. The version of OpenWrt I’m using is Chaos Calmer 15.05. Let’s start!



Assuming you are connected to your router through WLAN or LAN, SSH into the router console.

$ ssh root@

We need to install the necessary packages first. Run the following command in the router’s console.

# opkg update

# opkg install kmod-usb-core kmod-usb2 kmod-usb-storage kmod-usb-storage-extras block-mount kmod-usb-uhci kmod-usb-ohci kmod-fs-btrfs kmod-nls-cp437 kmod-nls-iso8859-1 block-mount luci-app-samba luci-i18n-samba-en samba36-server minidlna luci-app-minidlna

Notice the package kmod-fs-btrfs. It lets the kernel recognize the btrfs filesystem, i.e. use it if your USB storage is formatted with btrfs. If your drive is formatted with a FAT filesystem, use the kmod-fs-vfat package instead (kmod-fs-exfat for exFAT).

The above command installs packages that let us mount the USB storage and run a Samba server on top of it. The minidlna package lets us create a media server on top of the mounted USB storage.

We need to tell the router to auto-mount the USB storage so that we do not need to mount it manually each time the router boots up. Run the following commands:

# mkdir /mnt/sda1

# block detect > /etc/config/fstab

# /etc/init.d/fstab enable
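For reference, block detect writes one mount section per detected partition into /etc/config/fstab. A typical section looks roughly like the sketch below (the UUID is a placeholder; block detect fills in the real one). Make sure option enabled is set to '1' so the drive is mounted at boot:

config 'mount'
        option  target  '/mnt/sda1'
        option  uuid    '0000-0000'
        option  enabled '1'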

Plug in the USB drive and reboot the router. Run the following command and you should see your USB drive mounted.

# df -h

We will now cover configuring the Samba server both with and without authentication. Note that we will not use any authentication for our media server in this tutorial.

I. Without authentication: We have to edit /etc/config/samba to tell Samba where the share points are on the system. This file lets us share the root filesystem as well as the mount point.

# vi /etc/config/samba

My config file looks like this:

config samba
option name 'OpenWrt'
option workgroup 'WORKGROUP'
option description 'OpenWrt'
option homes '1'

config sambashare
option name 'sda1'
option path '/mnt/sda1'
option read_only 'no'
option create_mask '0766'
option dir_mask '0766'
option guest_ok 'no'
option users 'nobody'

config sambashare
option name 'root'
option path '/'
option read_only 'no'
option guest_ok 'no'
option create_mask '0766'
option dir_mask '0766'

Now we will edit /etc/samba/smb.conf, Samba’s main configuration file; it contains the global server settings plus one section per share.

My smb.conf file looks like this:

[global]
netbios name = OpenWrt
display charset = UTF-8
interfaces = lo br-lan
server string = OpenWrt
unix charset = UTF-8
workgroup = WORKGROUP
browseable = yes
deadtime = 30
domain master = yes
encrypt passwords = true
enable core files = no
guest account = nobody
guest ok = yes
invalid users = root
local master = yes
load printers = no
map to guest = Bad User
max protocol = SMB2
min receivefile size = 16384
null passwords = yes
obey pam restrictions = yes
os level = 20
passdb backend = smbpasswd
preferred master = yes
printable = no
security = user
smb encrypt = disabled
smb passwd file = /etc/samba/smbpasswd
syslog = 2
use sendfile = yes
writeable = yes

[homes]
comment = Home Directories
browsable = no
read only = no
create mode = 0750

[sda1]
path = /mnt/sda1
valid users = nobody
read only = no
guest ok = yes
create mask = 0766
directory mask = 0766

[root]
path = /
read only = no
guest ok = no
create mask = 0766
directory mask = 0766

Notice the line guest ok = yes under [sda1]. It allows a guest user to log in without any authentication.

II. With authentication: No points for guessing what changes we need to make to provide authentication-based Samba access. We have to edit smb.conf, along with one extra step.

Change the line guest ok = yes to following under [sda1]:

guest ok = no

One more step and we’re done. OpenWrt has a user named nobody. We will use this username to access the Samba file system. Type the following command to set a password for nobody:

# smbpasswd -a nobody

Enter a password and you’re done! The next time you access the Samba file system, it will prompt you for a username and password. Provide nobody as the username and the password you just set.

Samba configuration is done. We will use the Samba server without any authentication, so make sure the line under [sda1] in smb.conf reads guest ok = yes, and finally run the following command:

# /etc/init.d/samba enable

Reboot the router.

Now we will configure minidlna so that our media server gets ready in no time.

For that, we will edit /etc/config/minidlna. Run the following command:

# vi /etc/config/minidlna

A basic configuration should look like this:

config minidlna 'config'
option enabled '1'
option port '8200'
option interface 'br-lan'
option log_dir '/var/log'
option inotify '1'
option notify_interval '900'
option serial '12345678'
option model_number '1'
option root_container '.'
option album_art_names 'Cover.jpg/cover.jpg/AlbumArtSmall.jpg/albumartsmall.jpg/AlbumArt.jpg/albumart.jpg/Album.jpg/album.jpg/Folder.jpg/folder.jpg/Thumb.jpg/thumb.jpg'
option friendly_name 'Super Media Server'
option db_dir '/mnt/sda1/db'
option presentation_url ''
list media_dir 'A,/mnt/sda1/audio'
list media_dir 'V,/mnt/sda1/video'

Notice the last two lines. They are prefixed with A, and V,, telling the minidlna utility where to find audio and video files respectively. You can also add a picture directory by prefixing the directory path with P,, e.g. P,/mnt/sda1/picture.
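minidlna expects these directories to exist on the drive, so create the layout first. A minimal sketch (shown here against a temporary directory purely for illustration; on the router the base would be /mnt/sda1):

```shell
# Create the directory layout that the media_dir entries point at.
# On the router: base=/mnt/sda1
base="$(mktemp -d)"
mkdir -p "$base/audio" "$base/video" "$base/picture"
ls "$base"
```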

You’re almost done. Run the following commands:

# /etc/init.d/minidlna enable
# /etc/init.d/minidlna start
# /usr/bin/minidlna -f /tmp/minidlna.conf  -d -R

And you’re done. Just reboot the router and use your own media server to stream audio and video files.

To see the media server in action:

I. On Ubuntu:

Open your file manager and click on Browse Network. You should see the name of the workgroup (in our case, WORKGROUP), under which you will find the contents of your USB storage device. An example is shown in the following screenshot:


II. On Windows:

Click on Network and you should see the friendly name of your media server. See the screenshot below.


III. On Android: Yes, you can also stream your content on an Android phone. Download and install an app called BubbleUPnP from the Google Play Store. Read the instructions on how to access your media server on its Google Play page (or in the app) and watch your content stream on your smartphone.

That’s it for the tutorial. I’m experimenting with OpenVPN on OpenWrt so that anyone on the internet can access my media server; I will update this blog post as soon as I manage to get it running. Thanks for your patience. Let me know if you’re stuck or have any doubts/questions.

Happy Hacking!




How I managed to deploy a 2-node Ceph cluster

As part of a course called Data Storage Technology and Networks at BITS Pilani – Hyderabad Campus, I took up a project to integrate the Ceph Storage Cluster with OpenStack. To integrate the two, we first need to deploy a Ceph Storage Cluster on more than one machine (we will use 2 machines for the purpose). This blog post will give you the exact steps on how to do that.

Before starting, let me tell you that deploying a Ceph cluster on 2 nodes is just for learning purposes. In a production environment, there are tens of machines, if not hundreds, serving as nodes of the storage cluster.

Below is the schematic diagram that gives a basic idea of what we are trying to achieve.


ceph1: This node would become the admin node, the monitor node and would also serve as one of the Object Storage Devices (OSD).

ceph2: This node would serve as an Object Storage Device.

Configurations of ceph1 and ceph2: OS CentOS 7, RAM ~2 GB, HDD ~150 GB.

The Ceph version we are going to install is v9.2.1 (Infernalis).

Let’s get started.

Step 0: This step ensures everything goes smoothly from the network point of view.

In CentOS 7, the default configuration leaves the network interfaces down by default; that is, an interface is not assigned an IP address when you boot the machine. Let’s make the IP static and bring the interface up on boot.

Run the following command (on both ceph1 and ceph2) to find the name of the interface your machine uses for network traffic:

$ ifconfig -a

Let’s say we have got {iface} as the interface name. Now switch to the root user and do the following:

# cd /etc/sysconfig/network-scripts

Locate ifcfg-{iface} in the directory and run the following:

# nano ifcfg-{iface}

Locate the attributes BOOTPROTO and ONBOOT. Edit them in the following way (note that a static address also needs IPADDR, NETMASK and GATEWAY entries appropriate to your network):

BOOTPROTO=static
ONBOOT=yes
Reboot the machines and ensure you get an IP address after rebooting.

Now we will edit the hostname of each machine and let each machine know which host lives at which IP address. We will assign ‘ceph1’ as the hostname for ceph1 and ‘ceph2’ as the hostname for ceph2. Let us assume the IP address of ceph1 is and that of ceph2 is

On each Ceph node, log in as the root user and run the following command:

# nano /etc/sysconfig/network

Add the following line if not present (on ceph1):

HOSTNAME=ceph1

and on ceph2:

HOSTNAME=ceph2
Save changes and exit.

Now we will tell ceph1 at which IP address ceph2 is located, and vice versa. As the root user, run the following:

# nano /etc/hosts

Add the following lines to the file (on both ceph1 and ceph2), pairing each node’s IP address with its hostname:

            ceph1
            ceph2

Save the file and exit.
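As a sketch with hypothetical addresses (substitute the real IPs of your nodes), the added entries would look like:

192.168.1.101    ceph1
192.168.1.102    ceph2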

Now we will use the hostname program to change the hostname that is currently set. On ceph1:

# hostname ceph1

and on ceph2:

# hostname ceph2

Now run the following command on both machines to make the changes take effect:

# service network restart

Now for the final thing we need to take care of: we will disable and stop the firewall service on each of the nodes for smooth operation. I ran into errors when I did not stop the service; you can find more about the error in this mailing list.

Let’s disable the service:

# systemctl disable firewalld

# systemctl stop firewalld

That’s it. Now we can move to ceph deployment.

Step 1: Add the ceph-deploy repositories on the admin node. Run the following commands:

$ sudo yum install -y yum-utils && sudo yum-config-manager --add-repo && sudo yum install --nogpgcheck -y epel-release && sudo rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-7 && sudo rm /etc/yum.repos.d/dl.fedoraproject.org*.repo

You might run into errors regarding the GPG key. The solution is mentioned in this blog post.

Step 2: Now we are going to add the Ceph yum repo. We will use the file path /etc/yum.repos.d/ceph-deploy.repo.

$ sudo nano /etc/yum.repos.d/ceph-deploy.repo

Add the following lines to the file (note the bracketed section header):

[ceph-noarch]
name=Ceph noarch packages
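For reference, the corresponding file in the upstream Ceph quick-start documentation for Infernalis looks roughly like the sketch below; treat the URLs as assumptions and verify them against the Ceph docs before relying on them:

[ceph-noarch]
name=Ceph noarch packages
baseurl=http://download.ceph.com/rpm-infernalis/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=https://download.ceph.com/keys/release.asc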

Step 3: Update the repositories and install the ceph-deploy package.

$ sudo yum update && sudo yum install ceph-deploy

Step 4: The admin node must have password-less SSH access to the Ceph nodes. When ceph-deploy logs in to a Ceph node as a particular user, that user must have passwordless sudo privileges.

Let’s install NTP on each of ceph1 and ceph2:

$ sudo yum install ntp ntpdate ntp-doc

Now we will install SSH on each of ceph1 and ceph2:

$ sudo yum install openssh-server

Ensure SSH is running on the nodes:

$ ps aux | grep sshd

To make the process of SSHing into the Ceph nodes easy, we will edit the SSH config file on the admin node.

$ nano ~/.ssh/config

Add the following lines if not present:

Host ceph2
Hostname ceph2
User ceph

Step 5: This is an important step. We will now create a new user on each Ceph node.

The ceph-deploy utility must log in to a Ceph node as a user that has passwordless sudo privileges, because it needs to install software and configuration files without prompting for passwords.

Let’s create a new user on each of ceph1 and ceph2. Run the following commands on each of them:

$ sudo useradd -d /home/ceph -m ceph
$ sudo passwd ceph

Add the new user to the sudoers list:

$ echo "ceph ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/ceph
$ sudo chmod 0440 /etc/sudoers.d/ceph

Now let’s enable password-less SSH. Since ceph-deploy will not prompt for a password, we must generate SSH keys on the admin node (ceph1) and distribute the public key to each Ceph node (here, ceph2).

Run the following command to generate SSH keys:

$ ssh-keygen

When asked for passphrase, leave it blank.

Now we will copy the key to ceph2 by running the following command:

$ ssh-copy-id ceph@ceph2
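The two steps above can be sketched non-interactively as below. This is a demo only: it writes a throwaway key into a temporary directory, whereas on ceph1 you would simply run ssh-keygen, accept the default path, and leave the passphrase blank.

```shell
# Generate a passphrase-less RSA key pair into a temp dir (demo only).
dir="$(mktemp -d)"
ssh-keygen -q -t rsa -N '' -f "$dir/id_rsa"
ls "$dir"
# On the real admin node you would then push the public key with:
#   ssh-copy-id ceph@ceph2
```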

Step 6: Now we are going to install the yum priorities plugin. It lets you assign priorities to package repositories, so that packages from a preferred repository (such as the Ceph repository) win over same-named packages from other repositories.

Run the following command:

$ sudo yum install yum-plugin-priorities

Great! You just completed the first and the most important half of this tutorial. You can take a break now if you want to. 🙂

Now moving to the second half.

Step 7: On the admin node, i.e. ceph1, do the following:

$ cd /home/ceph/

$ mkdir my_cluster

$ cd my_cluster

Disable requiretty, as you might otherwise encounter errors while running ceph-deploy. To do that, run:

$ sudo sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' /etc/sudoers
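Since a typo in /etc/sudoers can lock you out of sudo, it is worth dry-running the substitution on a scratch file first. A sketch (the scratch file below is synthetic):

```shell
# Dry-run the requiretty substitution on a scratch copy.
tmp="$(mktemp)"
printf 'Defaults    requiretty\nDefaults    env_reset\n' > "$tmp"
sed -i 's/Defaults    requiretty/#Defaults    requiretty/g' "$tmp"
grep '^#Defaults' "$tmp"
```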

Step 8: Let’s create a cluster now.

Ensure that you are on ceph1 and inside the /home/ceph/my_cluster directory. Now execute the following command:

$ ceph-deploy new ceph1

This command will create 3 files. One of them is ceph.conf. Let’s edit that file to tell Ceph to keep 2 replicas of each object, matching our 2 OSDs.

$ nano ceph.conf

Under [global] section append:

osd pool default size = 2
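After the edit, the [global] section of ceph.conf would look roughly like the sketch below. The fsid and monitor entries are generated by ceph-deploy new and will differ on your machine; the values here are placeholders:

[global]
fsid = 00000000-0000-0000-0000-000000000000
mon_initial_members = ceph1
mon_host = <ip-of-ceph1>
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd pool default size = 2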

Step 9: Let’s install Ceph on all the nodes.

$ ceph-deploy install ceph1 ceph2

Let’s add the initial monitor and gather the keys:

$ ceph-deploy mon create-initial

This will generate the keyring files that ceph-deploy needs, typically ceph.client.admin.keyring along with the bootstrap keyrings (ceph.bootstrap-osd.keyring, ceph.bootstrap-mds.keyring and ceph.bootstrap-rgw.keyring).
Step 10: Now we will add 2 OSDs. In a production environment, dedicated disks are typically assigned as OSDs. For our basic setup, we will use directories rather than whole disks.

On ceph1, execute the following commands:

$ cd /home/ceph

$ mkdir osd0

On ceph2, execute the following commands (you can also use the ssh utility):

$ cd /home/ceph

$ mkdir osd1

Let’s prepare the OSDs by running the following commands on ceph1:

$ cd /home/ceph/my_cluster

$ ceph-deploy osd prepare ceph1:/home/ceph/osd0 ceph2:/home/ceph/osd1

Now let’s activate the prepared OSDs. Run the following command on ceph1:

$ ceph-deploy osd activate ceph1:/home/ceph/osd0 ceph2:/home/ceph/osd1

Please note that preparing and activating may fail if the firewall service is not disabled. If you encounter an error similar to the one mentioned below, disable and stop the firewall service and try again.

[ceph1][WARNIN] No data was received after 300 seconds, disconnecting...

After successfully activating the OSDs, reboot the machines. It is very important that you reboot the machines.

Step 11: After rebooting, let’s check whether the OSDs are running. Run the following command on ceph1:

$ ceph osd tree

Typical output of the above command will list osd.0 and osd.1. If it does not, check again whether the firewall service is running; if so, disable and stop it, and activate the OSDs again (don’t prepare them again; they only need to be prepared once).

If you see all the OSDs in the output, then that means we have deployed the cluster successfully!

Step 12: Now we will use ceph-deploy to copy the configuration file and admin key to our admin node and our Ceph nodes, so that we can use the ceph CLI without having to specify the monitor address and ceph.client.admin.keyring each time we execute a command.

$ ceph-deploy admin ceph1 ceph2

$ sudo chmod +r /etc/ceph/ceph.client.admin.keyring

The second command ensures we have read permission on ceph.client.admin.keyring once it has been pushed to the nodes.

After this step, make sure the machines are rebooted.

Step 13: Final step. Now we will check the health of our cluster by running the following command on ceph1:

$ ceph health

The desired output is:

HEALTH_OK
Now we will see the status of our cluster. Run the following command on ceph1:

$ ceph status

The output should be similar to the output I got on my machine, which is depicted below:

cluster 9cb53496-a559-401a-a16f-cc3a3df8c1c4
health HEALTH_OK
monmap e1: 1 mons at {ceph1=}
election epoch 1, quorum 0 ceph1
osdmap e12: 2 osds: 2 up, 2 in
flags sortbitwise
pgmap v1415: 64 pgs, 1 pools, 0 bytes data, 0 objects
28708 MB used, 115 GB / 143 GB avail
64 active+clean

The output should have active+clean status.

If you have reached here that means you have completed the tutorial. Bravo!

A few tips:

The errors I encountered during the installation and deployment were mostly due to the ceph user lacking permission to access and/or modify files. So make sure you have granted enough permissions.

If you are unsure what permissions to give, run the following commands (you must have root privileges). Note that directories need the execute bit to be traversable, so use 775 rather than 664 on directories:

$ sudo chmod 664 /path/to/file

$ sudo chmod -R 775 /path/to/directory
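Directories need the execute bit to be entered and listed, which is why they get a different mode than plain files. A quick sketch of the distinction, run against temporary paths:

```shell
# Files: rw for owner and group is enough (664).
# Directories: also need the execute bit to be entered and listed (775).
f="$(mktemp)"
d="$(mktemp -d)"
chmod 664 "$f"
chmod 775 "$d"
stat -c '%a' "$f"
stat -c '%a' "$d"
```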

Note that this is for test environments only, like a lab, not for a production environment. Do ask your supervisor/boss/anybody-above-you before making significant changes.

Hope I helped. Have fun. Happy hacking!





Import GPG key in CentOS 7

I was trying to deploy a Ceph cluster on a CentOS 7 machine and, while following the steps mentioned on this page, I ran into the following error:

You have enabled checking of packages via GPG keys. This is a good thing.
However, you do not have any GPG public keys installed. You need to download
the keys for packages you wish to install and install them.
You can do that by running the command:
rpm --import public.gpg.key
Alternatively you can specify the url to the key you would like to use
for a repository in the 'gpgkey' option in a repository section and yum
will install it for you.

For more information contact your distribution or package provider.

Problem repository: dl.fedoraproject.org_pub_epel_7_x86_64_

The solution to this problem is to run the following command. Please note that you must have root privileges to do so.

# rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7

And voila! Everything was fine. Hope I helped.