
Tuesday, 18 December 2018

Block vs File vs Object Storage

The table below summarizes the key differences between Block, File and Object Storage. Hope it helps you understand the concepts!

Thursday, 20 September 2018

CosBench on Red Hat Linux

What is Cosbench

COSBench is a benchmarking tool, developed by Intel, that measures the performance of cloud object storage services such as Amazon S3, OpenStack Swift, Dell EMC ECS and Google Cloud Storage.

COSBench works on a controller/driver design: the controller coordinates the driver VMs, which generate the I/O against the cloud storage. A single VM can be configured as both controller and driver. For heavier, more stringent tests, use one controller VM and multiple driver VMs.

The detailed procedure for installing and using COSBench is available in the download package – "COSBenchUserGuide.pdf".

So you must be thinking: if everything is documented in the User Guide, then what is the purpose of this post? Hold on!
"The User Guide only covers the configuration procedure for Ubuntu, not RHEL. And if you follow that procedure on RHEL, it will fail!"

OK ! Let’s begin:

What is required?

·         A VM/machine running RHEL with the specifications defined in the COSBench User Guide – "Reference Hardware Configuration".
·         Ask your RHEL admin to make sure the below packages are installed on the machine (very important):
o   Java Runtime Environment 1.6 or later
o   curl 7.22.0 or later
o   nc (netcat)
o   unzip
·         Most importantly, download the latest COSBench package from the below link:

Configuration on the Controller

·         FTP the COSBench package to the RHEL server, e.g. into /cosbench.
·         unzip 0.4.2.zip                -------------------------- Extracts the package (use the zip name of your version)
·         ln -s 0.4.2/ cos               -------------------------- Creates a symbolic link to the unzipped files
·         cd cos
·         chmod +x *.sh                  -------------------------- Makes the scripts executable

Configuration Files

These files will be under the cos folder.


conf/controller.conf

This file contains the name and URL of each driver VM – multiple entries if you are using multiple driver VMs – along with the controller settings:

log_level = INFO
log_file = log/system.log
archive_dir = archive

Note: If you have more driver VMs, add an entry per driver and change the name and url accordingly.
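For reference, a complete controller configuration for a two-driver setup might look like the sketch below. The driver names and the 18088 driver port follow the defaults in the COSBench User Guide; the IP addresses are placeholders:

```ini
[controller]
drivers = 2
log_level = INFO
log_file = log/system.log
archive_dir = archive

[driver1]
name = driver1
url = http://192.168.10.11:18088/driver

[driver2]
name = driver2
url = http://192.168.10.12:18088/driver
```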


cosbench-start.sh

This one is very important for configuration on RHEL servers. By default the script contains the line below; that value does not apply to RHEL:

TOOL_PARAMS="-i 0" ----- Delete this line

So edit this file and delete that line.
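If you prefer to script the fix, a one-line sed does it. The sketch below demonstrates the deletion on a scratch file so the effect is visible; on the real system you would run the same sed against cosbench-start.sh in the cos directory (the surrounding lines here are just placeholders):

```shell
# Build a scratch file standing in for cosbench-start.sh (contents are placeholders).
printf '%s\n' 'line before' 'TOOL_PARAMS="-i 0"' 'line after' > demo-cosbench-start.sh

# Delete the offending line in place.
sed -i '/TOOL_PARAMS="-i 0"/d' demo-cosbench-start.sh

# The file now contains only the surrounding lines.
cat demo-cosbench-start.sh
```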

Start the Controller

On the Controller VM:
1. cd cos
2. ./start-all.sh
3. This should report that all the services started successfully.
4. If any error comes up, run the start-all script again; the second time the services should start successfully.

Start the Drivers

On the Driver VM’s:
1. cd cos
2. ./start-driver.sh
3. This should report that all the services started successfully.
4. If any error comes up, run the script again; the second time the services should start successfully.

Run the Performance Test

Once the services are started successfully on both the controller and driver VMs, you can launch a web browser from any machine on the same LAN as the controller VM. By default the controller console URL is:

http://<controller-IP>:19088/controller/index.html

Follow the COSBench User Guide to configure the workloads.
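Workloads are submitted as XML files from the controller console. As a hedged starting point, a minimal S3 workload might look like the below; the access key, secret key, endpoint and container/object counts are all placeholders, while the stage and operation structure follows the S3 examples in the COSBench User Guide:

```xml
<?xml version="1.0" encoding="UTF-8" ?>
<workload name="s3-sample" description="sample S3 read/write test">
  <!-- accesskey, secretkey and endpoint are placeholders; substitute your own -->
  <storage type="s3" config="accesskey=AKEY;secretkey=SKEY;endpoint=http://s3.example.com" />
  <workflow>
    <workstage name="init">
      <work type="init" workers="1" config="cprefix=s3test;containers=r(1,2)" />
    </workstage>
    <workstage name="prepare">
      <work type="prepare" workers="1" config="cprefix=s3test;containers=r(1,2);objects=r(1,10);sizes=c(64)KB" />
    </workstage>
    <workstage name="main">
      <work name="main" workers="8" runtime="60">
        <operation type="read" ratio="80" config="cprefix=s3test;containers=u(1,2);objects=u(1,10)" />
        <operation type="write" ratio="20" config="cprefix=s3test;containers=u(1,2);objects=u(11,20);sizes=c(64)KB" />
      </work>
    </workstage>
    <workstage name="cleanup">
      <work type="cleanup" workers="1" config="cprefix=s3test;containers=r(1,2);objects=r(1,20)" />
    </workstage>
    <workstage name="dispose">
      <work type="dispose" workers="1" config="cprefix=s3test;containers=r(1,2)" />
    </workstage>
  </workflow>
</workload>
```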

Wednesday, 11 July 2018

VNX Unified Power On and Power Off Procedure


  1. Verify that there are no faults on the storage
  2. Before powering down the storage:
    • Stop any replication
    • Ensure Hosts are not accessing the data
Power-Off Control Station and Data movers
  1. Connect to the Control Station using Putty. Login using nasadmin and su to root user
  2. Halt the Control Station and Data Movers using the below command. Enter yes when prompted to power off.
    • # /nasmcd/sbin/nas_halt now
  3. This may take around 15-20 minutes at most. Verify the power LEDs on the Data Movers and the Control Station.
Note: On the front of a successfully powered-down DME, the enclosure amber fault LED is lit and the enclosure power LED is not lit.
On a successfully powered-down Control Station, only the networking LED (the last LED from the right on the front) may be lit. All other LEDs on the front of the Control Station are off.

Power-Off Storage Processors
  1. Power-Off the Storage Processors by turning the SPS switches to the OFF position on both SPSs. This should take around 5-10 minutes
  2. Remove the Power Cables


Power-On Procedure

Note: Do not connect the power cables of all the components at once
  1. Turn the SPS power switches to the ON position. Wait 10-15 minutes for both SPs to boot.
  2. Connect the power cables of the Data Movers to the PDU and wait for the Data Movers to power up.
  3. Wait around 5 minutes to ensure the Data Movers have booted up.
  4. Connect the power cables of the Control Station. You may need to push the power button.
  5. Once all the components are up, verify the health from Unisphere.
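Boot state can also be checked from the Control Station CLI. A hedged aside: getreason is the standard VNX/Celerra status command, and the reason codes noted in the comment are the common ones:

```shell
# Run as root on the Control Station.
# Common reason codes: 10 = primary Control Station booted, 5 = data mover contacted (fully booted).
/nasmcd/sbin/getreason
```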

Monday, 4 June 2018

Initialize a Unity Storage without Network Connection

Problem Statement:

You have bought a new Dell EMC Unity storage array. Due to some policies, you cannot install the Connection Utility or the command-line tool to initialize the storage.

How will you proceed?!


" Unity Storage has Service IP address hard coded on the Service Port. However, you cannot directly access the IP address." 

So how does it work???

1. You need to first download the IPMItool utility and install it. The default installation path is C:\IPMItool

2. Have the Service IP Information noted down:

SP A Service IP: 128.221.1.250
SP B Service IP: 128.221.1.251

3. Connect your laptop to the SP A or SP B Service Port using a LAN cable.

The Service Port is the port at the bottom left of the below pic, marked with a wrench symbol

4. Assign the below IP address to the laptop

Laptop IP Address: 128.221.1.249 / Subnet: 255.255.255.0

5. Open Command Prompt and change the default location to C:\IPMItool. Run below command:

ipmitool.exe -I lanplus -C 3 -U console -P <password> -H <host> sol activate

The password is the serial number of the Unity array. You can find it physically on the PSNT tag on the back or front of the system.
The host is the SP A or SP B Service IP address mentioned above.

6. Once you run the above command, you will see the SP login prompt. Enter the default credentials.

One fun exercise here... Username is admin or service. You need to find out what the default password is !!!

7. Run below command to configure the IP address and Hostname for your Unity Storage:

svc_initial_config -a -f <hostname> -n "<IP Address> <Netmask> <Gateway>"

Hostname is the hostname you want to configure
IP address is the management IP address
Netmask is the subnet mask
Gateway is the default gateway
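As a sanity check, the small sketch below assembles the command line from example values so you can review it before typing it on the SP console. The hostname and addresses are hypothetical placeholders; substitute your own:

```shell
# All values below are hypothetical examples -- substitute your own.
UNITY_HOSTNAME="unity01"        # desired hostname
MGMT_IP="10.10.10.50"           # management IP address
NETMASK="255.255.255.0"         # subnet mask
GATEWAY="10.10.10.1"            # default gateway

# Assemble the command exactly as it should be typed on the SP console:
CMD="svc_initial_config -a -f $UNITY_HOSTNAME -n \"$MGMT_IP $NETMASK $GATEWAY\""
echo "$CMD"
```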

After the command succeeds, connect to the management port of the SP and ping the configured IP address.

All the best !

Monday, 29 January 2018

DELL EMC Unity Sizer

Unity Sizer is your one-stop destination for all the best practices for sizing and configuring Unity Storage.

You must have a DELL EMC Support account to access the tool.

Here you go !!


Thursday, 4 January 2018

Add a meta device with soft partitions in Main Mirror (Solaris SVM)


Adding a meta device with soft partitions to Main Mirror in Solaris SVM.


Storage LUN (c4t60000970000297000217533030373445d0s0) is under the control of metadevice d888, which has two soft partitions, d890 and d891. The customer wants to mirror this LUN to another LUN from a different storage array. For mirroring, we need to add both LUNs under a main mirror in SVM. However, the source LUN has soft partitions, so we cannot simply add it under the main mirror.


1.      Remove the soft partitions from the metadevice, noting down each partition's offset (-o) and size (-b) details.
2.      Add the metadevice under the control of the main mirror.
3.      Re-create the soft partitions on the main mirror.
4.      Create a metadevice on the new LUN and add it under the main mirror.

Note: This process involves downtime because the mount points that use the soft partitions need to be unmounted. Before starting the procedure, take a fresh backup of these mount points.


1.      Note down the metadevice and soft partition details from the output of the metastat -p command.

d888 1 1 c4t60000970000297000217533030373445d0s0
d891 -p d888 -o 104861504 -b 102760448
d890 -p d888 -o 3872 -b 104857600

Here d888 is the metadevice and d890 and d891 are the soft partitions; they are mounted on the server as below:

/dev/md/dsk/d890        49G   1.0G    48G     3%    /test1
/dev/md/dsk/d891        48G    49M    48G     1%    /test2

2.      Unmount the mount points
3.      Clear the Soft Partitions

root@test # metaclear d890
d890: Soft Partition is cleared
root@test # metaclear d891
d891: Soft Partition is cleared

4.      Add the metadevice d888 to Main Mirror d880

root@test # metainit d880 -m d888
d880: Mirror is setup

5.      Verify the Main Mirror Properties of d880

root@test # metastat d880
d880: Mirror
            Submirror 0: d888
            State: Okay
            Pass: 1
            Read option: roundrobin (default)
            Write option: parallel (default)
            Size: 209710080 blocks (99 GB)

d888: Submirror of d880
            State: Okay
            Size: 209710080 blocks (99 GB)
            Stripe 0:
              Device                                    Start Block  Dbase        State Reloc Hot Spare
        c4t60000970000297000217533030373445d0s0          0     No            Okay   Yes

6.      Create the soft partitions on the main mirror, referring to the output of metastat -p from step 1.

root@test # metainit d891 -p d880 -o 104861504 -b 102760448
d891: Soft Partition is setup
root@test # metainit d890 -p d880 -o 3872 -b 104857600
d890: Soft Partition is setup

7.      Mount the Soft partitions to verify the data integrity.

root@test # mount /dev/md/dsk/d890 /test1
root@test # mount /dev/md/dsk/d891 /test2

Verify the size details in df -h; they will be the same as earlier and the data will be accessible.

/dev/md/dsk/d890        49G   1.0G    48G     3%    /test1
/dev/md/dsk/d891        48G    49M    48G     1%    /test2

8.      Create a metadevice on the new LUN and add it under the main mirror d880 using the standard procedure.
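Step 8 can be sketched as below. The new LUN's device path and the submirror name d889 are assumptions for illustration; substitute your actual device. Once the second submirror is attached, SVM starts the resync automatically:

```shell
root@test # metainit d889 1 1 c5t60000970000297000217533030373446d0s0
d889: Concat/Stripe is setup
root@test # metattach d880 d889
d880: submirror d889 is attached
root@test # metastat d880          # watch for "Resync in progress" until the mirror is in sync
```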