Monday, December 4, 2023

PowerScale Isilon Node Firmware Upgrade

 


Before you install a node firmware package, make sure that you have the necessary
free space in your /var directory.
To install a node firmware package successfully, you must have at least
250 MB of free space in the /var directory of every node you are working on.
You can check the free space in the /var directory by running the following command:
df -h /var
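The free-space check can also be scripted. The sketch below checks the local node only; on a cluster you could fan it out to every node (for example with isi_for_array). The threshold logic is an illustration, not an Isilon-provided tool:

```shell
# Sketch: verify /var has at least 250 MB free before installing node firmware.
REQUIRED_KB=$((250 * 1024))
FREE_KB=$(df -Pk /var | awk 'NR==2 {print $4}')
if [ "$FREE_KB" -ge "$REQUIRED_KB" ]; then
    echo "OK: /var has ${FREE_KB} KB free"
else
    echo "FAIL: /var has only ${FREE_KB} KB free (need ${REQUIRED_KB} KB)"
fi
```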

1. Download the new firmware package.
a. Visit EMC online support and download the latest firmware package.
b. Open a secure shell (SSH) connection to any node in the cluster and log in using
the "root" account.
c. Copy the firmware package to the /ifs/data directory on the cluster.
2. Install the firmware package. Depending on your version of OneFS, run one of the
following commands:
OneFS 8.0 or later
isi upgrade patches install IsiFw_Package_<versionnumber>.tar
Earlier than OneFS 8.0
isi pkg install IsiFw_Package_<versionnumber>.tar
The cluster displays a message stating that the firmware package was successfully
installed.
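If you script firmware installs across clusters running different releases, the version branch in step 2 can be expressed as a small helper. This is a sketch for illustration only; the version-parsing logic is my assumption, and the function merely prints the command rather than executing it:

```shell
# Sketch: choose the firmware install command based on the OneFS major version.
pick_install_cmd() {
    ver=$1; pkg=$2
    major=${ver%%.*}                 # e.g. "8" from "8.0.0.5"
    if [ "$major" -ge 8 ]; then
        echo "isi upgrade patches install $pkg"
    else
        echo "isi pkg install $pkg"
    fi
}

pick_install_cmd 8.0.0.5 IsiFw_Package_10.1.1.tar
pick_install_cmd 7.2.1.0 IsiFw_Package_10.1.1.tar
```

The first call prints the OneFS 8.0+ command, the second the pre-8.0 command.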

After the node restarts, confirm that the NVRAM firmware matches the installed
firmware package. Depending on your version of OneFS, run one of the following
commands:
OneFS 8.0 or later
isi upgrade cluster firmware devices
Earlier than OneFS 8.0
isi firmware status
If the NVRAM firmware still does not match the installed firmware package, you must
run the firmware update a second time. Depending on your version of OneFS, run one
of the following commands:
OneFS 8.0 or later
isi upgrade cluster firmware start
Earlier than OneFS 8.0
isi firmware update
Update status

To monitor progress, run the following loop, which refreshes the upgrade view every 30 seconds and lists any device whose firmware does not yet match the target version:

node1# while :; do clear; date; isi upgrade view; isi up no li | grep "LNN\|State" | paste - -; echo; isi up no fi pr li | awk '{if (($4 != "-" && $4 != $3) || ($5 != "-" && $4 != "-")) print}'; sleep 30; done

Wed Nov 29 15:22:08 CST 2017

Upgrade Status:

   Cluster Upgrade State: committed
Current Upgrade Activity: Firmware
      Upgrade Start Time: 2017-11-28T14:18:45
   Upgrade Finished Time: 2017-11-28T14:28:25
      Current OS Version: 8.0.0.5_build(81)style(5)
      Upgrade OS Version: N/A

Nodes Progress:

     Total Cluster Nodes: 3
       Nodes On Older OS: 3
          Nodes Upgraded: 0
Nodes Transitioning/Down: 0

If there are any errors please run "isi_upgrade_logs" to gather more information.

             Node LNN: 1           Node Upgrade State: committed
             Node LNN: 2           Node Upgrade State: upgrade ready
             Node LNN: 3           Node Upgrade State: upgrade ready

Lnns  Device               Old Version             New Version             Status
------------------------------------------------------------------------------------
1     CMC_Yeti             02.07                   02.07                   upgraded
2     CMC_Yeti             02.05                   02.07                   upgrading
3     CMC_Yeti             02.05                   02.07                   -
------------------------------------------------------------------------------------
Total: 36

node1# isi upgrade patches uninstall IsiFw_Package_<versionnumber>.tar

node1# isi upgrade patches list
Patch Name Description Status
-----------------------------

-----------------------------

node1# cd /ifs/data/Isilon_Support


node1# vi PRSisiHealth

node1# perl PRSisiHealth -u 8.0.0.5

Dell EMC Remote Proactive Health Check            0.1163
Live Cluster Analysis                             Wed Nov 29 15:47:39 2017
Cluster Name                                      gucfs38d
Node Count                                        3
Current OneFS Version                             8.0.0.5
Target OneFS Version                              WARN
  WARN: OneFS target version 8.0.0.5 is less than the current OneFS version, performing checks with no target OneFS version.
OneFS Version                                     PASS
Highly Recommended Patches                        PASS
Cluster Capacity                                  PASS
Cluster Health Status                             FAIL
  FAIL: The cluster health is ATTN
  FAIL: Node 1 is reporting ATTENTION
  FAIL: Node 2 is reporting ATTENTION
  FAIL: Node 3 is reporting ATTENTION
  INFO: Refer to KB210505 (https://support.emc.com/kb/210505) for details.
Critical Events                                   FAIL
  FAIL: Critical event 2059 for node 2: External network link ext-1 (igb0) down
  FAIL: Critical event 2061 for node -1: External network link ext-1 (igb0) down
  FAIL: Critical event 2147 for node 3: External network link ext-1 (igb0) down
  FAIL: Critical event 2168 for node 1: External network link ext-1 (igb0) down
  INFO: Refer to KB210506 (https://support.emc.com/kb/210506) for details.
Jobs Status                                       PASS
System Partition Free Space                       PASS
Cluster Services                                  PASS
Processes                                         PASS
Node Uptime                                       PASS (0 days)
Upgrade Status                                    PASS
Hardware Status                                   PASS
BMC/CMC Hardware Monitoring                       PASS
Boot Disks Life Remaining                         PASS
Mirror Status                                     PASS
Memory                                            PASS
Drives Health                                     PASS
Drives Firmware (DFP 1.18/DSP 1.21)               INFO
  INFO: Model                          Firmware   DSP(1.21)  DFP(1.18)  Count Nodes
  INFO: HGST HUSMM1640ASS200           A204       -          -          3     1-3
  INFO: ST2000NM0055-1V4104            BL06       -          -          105   1-3
  INFO: Refer to KB210512 (https://support.emc.com/kb/210512) for details.
Node Firmware (10.1.1)                            PASS
Node Compatibility                                PASS
SmartConnect Service IP                           PASS
Duplicate Gateway Priority                        PASS
SyncIQ                                            PASS
Authentication Status                             PASS
Licenses                                          PASS
Access Zones                                      PASS (1)
Aspera                                            PASS
Cluster Encoding                                  PASS (utf-8)
Time Zone                                         PASS (Asia/Taipei)
DialHome & Remote Connectivity                    INFO
  INFO: Current states:
  INFO:    ConnectEMC is Disabled
  INFO:    ESRS is not enabled
ETAs                                              PASS
BXE Nodes                                         INFO (3)
  INFO: Nodes that have BXE interfaces: 1-3


UPGRADE ISSUE DETECTED

Wednesday, November 8, 2023

Brocade SAN Fabric Zoning

 

Zoning Brocade switches: zoning overview

Storage area networks (SANs) are deployed at most larger organizations, and provide centralized administration for storage devices and management functions. When multiple clients are accessing storage resources through a SAN, you need a way to limit which targets and logical units each initiator can see. Typically LUN masking is used on the storage array to limit which initiators can see which logical units, and zoning is used on the SAN switches to limit which initiators can see which targets. In the next five blog posts, I plan to provide a step-by-step guide to zoning Brocade switches.

Brocade zoning comes in two main flavors. There is hard zoning (port-based zoning), which allows you to create a zone with a collection of switch ports. The second zoning method is soft zoning (WWN-based zoning), which allows you to create a zone with one or more WWNs. There are tons of documents that describe why you would want to use each form of zoning. I typically use the following two rules to determine which zoning method I will use:

1. Will I ever need to move the host to a different switch or port? If so, I will implement soft zoning.

2. Are there any policies that require me to lock an initiator to a specific port? If so, I will use hard zoning.

I prefer soft zoning, since it provides tons of flexibility when dealing with switch upgrades, faulty SFPs, and defective hardware. But each location has different policies, so it’s best to take that into account each time you design or implement your zone layout.

To implement zoning on a Brocade switch, the following tasks need to be performed:

1. Add aliases for each port / WWN

2. Add the aliases to a zone

3. Add the zone to a configuration

4. Save and enable the new configuration
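The four tasks above map to a short command sequence. The sketch below only prints the Brocade commands for one host, so you can review them before pasting into a switch session; all alias, zone, and config names (and the WWNs) are made up for illustration:

```shell
# Sketch (dry run): emit the alias -> zone -> config -> enable sequence.
HOST_ALIAS=CentosNode2Port1
HOST_WWN=21:00:00:e0:8b:1d:f9:03
TARGET_ALIAS=NevadaPort1
ZONE=CentOSNode2Zone1
CFG=SANFabricOne

echo "alicreate \"$HOST_ALIAS\", \"$HOST_WWN\""
echo "zonecreate \"$ZONE\", \"$TARGET_ALIAS; $HOST_ALIAS\""
echo "cfgadd \"$CFG\", \"$ZONE\""
echo "cfgsave"
echo "cfgenable \"$CFG\""
```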

Brocade provides awesome zoning documentation, which you can access through the help and zonehelp commands:

 

Zoning Brocade switches: creating aliases

In my previous Brocade post, I talked about Brocade zoning, and mentioned at a high level what is required to implement zoning. Prior to jumping in and creating one or more zones in your fabric, you should add aliases to describe the devices that are going to be zoned together. An alias is a descriptive name for a WWN or port number, which makes your zone configuration much easier to read (if you are the kinda person who can spout off the WWNs of all of the devices in your fabric, you can kindly ignore this post). Brocade switches come with a number of commands to manage aliases, and these commands start with the string “ali”:

aliCreate – Creates a new alias
aliDelete – Deletes an alias
aliRemove – Removes an entry from an alias
aliRename – Renames an existing alias
aliShow – Shows the aliases

To create a new alias, you will first need to locate the WWN(s) or port(s) you want to assign to the alias. The easiest way to do this is by running switchshow on the switch (you can also use the Emulex or QLogic host utilities to gather WWN information):

Fabric1Switch1:admin> switchshow

switchName:    Fabric1Switch1
switchType:    16.2
switchState:   Online   
switchMode:    Native
switchRole:    Principal
switchDomain:  1
switchId:      fffc01
switchWwn:     10:00:00:60:69:c0:32:a4
switchBeacon:  OFF
Zoning:        ON (Brocade3200)
port  0: id N2 Online         F-Port 10:00:00:00:c9:3e:4c:eb
port  1: id N2 Online         F-Port 10:00:00:00:c9:3e:4c:ea
port  2: id N2 No_Light       
port  3: id N2 No_Light       
port  4: id N2 Online         F-Port 21:00:00:e0:8b:1d:f9:03
port  5: id N2 Online         F-Port 21:01:00:e0:8b:3d:f9:03
port  6: id N2 No_Light       
port  7: id N2 No_Light       



Once you know the port numbers or WWNs, you can run the alicreate command, passing it the name of the alias to create, as well as the port or WWN to associate with the alias (if you assign more than one port or WWN to the alias, they need to be separated with a semi-colon):

Fabric1Switch1:admin> alicreate "CentosNode2Port1", "21:00:00:e0:8b:1d:f9:03"

After an alias is created, you can view it with the alishow command:

Fabric1Switch1:admin> alishow "CentosNode2Port1"
alias: CentosNode2Port1
21:00:00:e0:8b:1d:f9:03

If you make a typo while adding a WWN or port to an alias, you can run aliadd to add the correct WWN or port to the alias, and then execute aliremove to remove the entry that was incorrectly added. If you make a typo in the alias name, you can run alirename to rename the entry. That is all for today. In my next blog post, I will talk about how to create zones.
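The fix-a-typo flow described above can be sketched the same way. The mistyped WWN below is invented for the example, and the commands are printed rather than run:

```shell
# Sketch (dry run): replace a mistyped WWN in an alias.
ALIAS=CentosNode2Port1
BAD_WWN=21:00:00:e0:8b:1d:f9:30    # the typo (hypothetical)
GOOD_WWN=21:00:00:e0:8b:1d:f9:03   # the intended WWN

echo "aliadd \"$ALIAS\", \"$GOOD_WWN\""      # add the correct entry first
echo "aliremove \"$ALIAS\", \"$BAD_WWN\""    # then remove the bad one
```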

 

Zoning Brocade switches: creating zones

I previously talked about creating aliases on Brocade switches, and am going to use this post to discuss zone creation. Zones allow you to control which initiators and targets can see each other, which enhances security by limiting access to devices connected to the SAN fabric. As previously discussed, we can assign an alias to each initiator and target. Once an alias is assigned, we can create a zone and add these aliases to it. Brocade manages zones with the zone* commands, which are listed below for reference:

zoneAdd – Adds a member to an existing zone
zoneCopy – Copies an existing zone
zoneCreate – Creates a new zone
zoneDelete – Deletes a zone
zoneRemove – Removes a member from a zone
zoneRename – Renames a zone
zoneShow – Shows the list of zones

To create a new zone, we can run the zonecreate command with the name of the zone to create, and the list of aliases to add to the zone:

Fabric1Switch1:admin> zonecreate "CentOSNode2Zone1", "NevadaPort1; CentosNode2Port1"

Once the zone is created, we can view it with the zoneshow command:

Fabric1Switch1:admin> zoneshow "CentOSNode2Zone1"

 zone:  CentOSNode2Zone1       
               NevadaPort1; CentosNode2Port1



Now that we have a zone, we need to add it to the switch configuration and then enable that configuration. I will discuss that in more detail when I discuss managing Brocade configurations.

 

 

Zoning Brocade switches: Creating configurations

I’ve previously talked about creating Brocade aliases and zones, and wanted to discuss zone configurations in this post. Brocade zone configurations allow you to group one or more zones into an administrative unit, which you can then apply to a switch. Brocade has a number of commands that can be used to manage configurations, and they start with the string “cfg”:

cfgadd – Add a member to the configuration
cfgcopy – Copy a zone configuration
cfgcreate – Create a zone configuration
cfgdelete – Delete a zone configuration
cfgremove – Remove a member from a zone configuration
cfgrename – Rename a zone configuration
cfgshow – Print zone configuration

To create a new configuration, you can run the cfgcreate command with the name of the configuration to create, and an initial zone to place in the configuration:

Fabric1Switch1:admin> cfgcreate "SANFabricOne", "CentOSNode1Zone1"

Once the configuration is created, you can add additional zones using the cfgadd command:

Fabric1Switch1:admin> cfgadd "SANFabricOne", "CentOSNode1Zone2"

To ensure that your changes persist through switch reboots, you can run cfgsave to write the configuration to flash memory:

Fabric1Switch1:admin> cfgsave

Starting the Commit operation...
0x102572c0 (tRcs): May  8 08:51:37
    INFO ZONE-MSGSAVE, 4, cfgSave completes successfully.
 
cfgSave successfully completed



To view a configuration, you can run the cfgshow command:

Fabric1Switch1:admin> cfgshow

Defined configuration:
 cfg:   SANFabricOne   
               CentOSNode1Zone1; CentOSNode1Zone2; CentOSNode2Zone1; 
               CentOSNode2Zone2
 zone:  CentOSNode1Zone1       
               CentOSNode1Port1; NevadaPort1
 zone:  CentOSNode1Zone2       
               CentOSNode1Port2; NevadaPort2
 zone:  CentOSNode2Zone1       
               NevadaPort1; CentosNode2Port1
 zone:  CentOSNode2Zone2       
               NevadaPort2; CentosNode2Port2
 alias: CentOSNode1Port1       
               21:00:00:1b:32:04:86:c3
 alias: CentOSNode1Port2       
               21:01:00:1b:32:24:86:c3
 alias: CentosNode2Port1       
               21:00:00:e0:8b:1d:f9:03
 alias: CentosNode2Port2       
               21:01:00:e0:8b:3d:f9:03
 alias: NevadaPort1    
               10:00:00:00:c9:3e:4c:eb
 alias: NevadaPort2    
               10:00:00:00:c9:3e:4c:ea
 
Effective configuration:
 cfg:   SANFabricOne   
 zone:  CentOSNode1Zone1       
               21:00:00:1b:32:04:86:c3
               10:00:00:00:c9:3e:4c:eb
 zone:  CentOSNode1Zone2       
               21:01:00:1b:32:24:86:c3
               10:00:00:00:c9:3e:4c:ea
 zone:  CentOSNode2Zone1       
               10:00:00:00:c9:3e:4c:eb
               21:00:00:e0:8b:1d:f9:03
 zone:  CentOSNode2Zone2       
               10:00:00:00:c9:3e:4c:ea
               21:01:00:e0:8b:3d:f9:03



Now you may notice in the output that there is a defined and an effective configuration. The effective configuration is what is currently running on the switch, and the defined configuration is what is saved in flash. To make the configuration in flash effective, run the cfgenable command (this should be run after you make alias/zone/configuration changes and issue a cfgsave):

Fabric1Switch1:admin> cfgenable “SANFabricOne”
Starting the Commit operation…
0x1024f980 (tRcs): Apr 29 20:44:39
INFO ZONE-MSGSAVE, 4, cfgSave completes successfully.

cfgEnable successfully completed



Once cfgenable runs, the effective configuration will be updated to match the configuration you have defined and saved. This completes this part of the Brocade series; the final installment will cover switch backups and putting all the pieces together.

Tuesday, April 11, 2017

Usable Capacity for EMC disk


Usable Drives Capacity for EMC Storage







VNX Unified: How to gather NAR files from control station of VNX


To gather the NAR files from the control station of a VNX array, perform the steps below.

NAR files are generated for performance analysis of the array.

1. Before you start generating NAR files from the control station, check whether any archive files have already been started or stopped, so that you can retrieve data for the required period.

This can be checked using the below command:

/nas/sbin/navicli -h SPA analyzer -status [checks if analyzer is running and when it was started or stopped]
/nas/sbin/navicli -h SPA analyzer -archive -list [lists the NAR files, if generated earlier]
 
[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h SPA analyzer -archive -list
No files found.

2. If there are no files present on the array, follow the steps below to generate the NAR files and then retrieve them for the required time period.
 ++ Check the intervals set for log collection, especially rtinterval, narinterval, and Periodic Archiving. Periodic Archiving must always be Enabled in order to create new NAR files automatically. The following can also be chained into a single command line:

/nas/sbin/navicli -h spa analyzer -get -narinterval
/nas/sbin/navicli -h spa analyzer -get -rtinterval
/nas/sbin/navicli -h spa analyzer -get -logperiod
/nas/sbin/navicli -h spa analyzer -get -periodicarchiving
/nas/sbin/navicli -h spa analyzer -status

OUTPUT:
 
[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h SPA analyzer -get -narinterval
Archive Poll Interval (sec):  300 (note that 300 seconds is the default setting - for more detailed NAR data set the -narinterval to 60 seconds)

[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h SPA analyzer -get -rtinterval
Real Time Poll Interval (sec):  300

[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h spa analyzer -get -logperiod
Current Logging Period (day):  nonstop (setting the -logperiod to nonstop along with Periodicarchiveing to enabled will allow for the automatic generation of sequential NAR files over the period set to run)

[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h SPA analyzer -get -periodicarchiving
Periodic Archiving:  Yes

[nasadmin@EMCCS1 ~]$ /nas/sbin/navicli -h SPA analyzer -status
Running. Started on 01/20/2017 09:30:18

3. In this step, change settings such as the narinterval and the logperiod for the log collection as required.
++ For example, if the logs are required for 7 days, set the log period to 7.

 
/nas/sbin/navicli -h spa analyzer -set -narinterval 60
/nas/sbin/navicli -h spa analyzer -set -rtinterval 60
/nas/sbin/navicli -h spa analyzer -set -logperiod 7
(These can also be run as a single command line.)

4. Once everything is set then we can start the NAR file generation using the below command and retrieve it once complete.


/nas/sbin/navicli -h SPA analyzer -start [starts analyzer]

To retrieve the logs:


/nas/sbin/navicli -h SPA analyzer -archive -list [lists all available NAR/NAZ files]
/nas/sbin/navicli -h SPA analyzer -archive -file <filename> [retrieves NAR with filename to current directory]
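To fetch every archived file in one pass, the list output can be fed into a loop. This sketch runs against a hard-coded sample listing and only prints the retrieval commands; the column parsing is an assumption about the -archive -list output format, so verify it against your own array before using it for real:

```shell
# Sketch (dry run): print a retrieval command for each archived NAR file.
# Replace the hard-coded listing with the real output of:
#   /nas/sbin/navicli -h SPA analyzer -archive -list
LISTING='Index Size Filename
0 1024 APM00113200784_SPA_2017-01-20.nar
1 2048 APM00113200784_SPA_2017-01-21.nar'

echo "$LISTING" | awk 'NR > 1 {print $NF}' | while read -r f; do
    echo "/nas/sbin/navicli -h SPA analyzer -archive -file $f"
done
```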


 

Commonly used commands:

/nas/sbin/navicli -h SPA analyzer -status
 [checks if analyzer is running and when it was started or stopped]
/nas/sbin/navicli -h SPA analyzer -start [starts analyzer]
/nas/sbin/navicli -h SPA analyzer -set -narinterval <seconds>
 [sets archive polling interval to number of seconds]
/nas/sbin/navicli -h SPA analyzer -archive -list [lists all available NAR/NAZ files]
/nas/sbin/navicli -h SPA analyzer -archive -file <filename> [retrieves NAR with filename to current directory]







Saturday, February 25, 2017

What are the recommended system read/write cache values for VNX arrays?

Issue:
VNX Block-only arrays ship by design with zero read and write cache values.
VNX Unified/File system installations set read and write cache values contrary to "best practices"


Cause:
This article offers basic guidance on suggested cache settings for VNX Block systems at installation time. VNX Block-only systems ship from the factory without any Read/Write cache values assigned, and with cache totally disabled on both storage processors (SPs). The Block-only versions of the VNX Installation Guides instruct users to set up the initial Read/Write cache settings for the array during the installation process. In addition, users should understand that the initial cache settings may need to be changed once normal and proper system operation/performance has been determined.

For VNX File/Unified systems, the installation of the File component [as a File re-install or as a Block-to-Unified upgrade] at File OE versions 7.0.12.0/7.0.13.0/7.0.14.0 sets the Read cache values on the SPs to a flat value of 512 MB/SP on all Unified models. However, this behavior contradicts the then-standard Best Practice values for VNX arrays. Users need to be aware that they may need to manually set the Read cache value back to the desired settings after a File install.
With VNX File OE version 7.0.35.3 and later, the Block-to-Unified upgrade, or a File re-install, will not change any array cache values if they are already set. However, where array cache values have not already been set, a File system installation will appropriately set the array cache values to the recommended settings outlined in the table below. Basically, Read cache values will be set at 10% of available cache (with a Read minimum of 256 MB and a Read maximum of 1024 MB), and the remainder of cache assigned to Write cache as described in the table below.

Resolution:
The following recommended default cache settings (by model) should be considered a starting point, and are valid for a Block-only or Unified installation.
Refer to the VNX Best Practices Guide for guidance on how to adjust these settings for your particular workload. Refer to knowledge base article 78781 for examples of how to change Read and Write cache values using Unisphere.

Per Storage Processor Memory            VNX5100  VNX5300  VNX5500  VNX5700  VNX7500
------------------------------------------------------------------------------------
Max. Cache/SP (GB)                         4        8        12       18     24/48*
Max. Write Cache (MB),
  Release 32 without enablers             950     3997     6488     9706     13450/-
Max. Write Cache (MB),
  Release 32 with data services**         650     2782     4738     6581     10600/16600

 
 Allocate 10% of available cache to read (Read cache is set per SP, with a minimum of 256 MB, except for the VNX5100, and a max of 1024 MB), and the rest to write (Write cache is set per array). Please note that certain Array features, such as FAST VP and Compression, will reduce the overall memory available for Read and Write cache, hence the minimum 256 MB Read cache reference. Setting read to 10% (with min and max) would lead to the following:
  • VNX5100 - Read 100 MB [Block-only system]
  • VNX5300 - Read 400 MB [Block or Unified/File]
  • VNX5500 - Read 700 MB [Block or Unified/File]
  • VNX5700 - Read 1024 MB [Block or Unified/File]
  • VNX7500 - Read 1024 MB [Block or Unified/File]  
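The 10% rule with its 256 MB floor and 1024 MB ceiling is easy to express as arithmetic. The helper below is just an illustration of the clamping; which "available cache" figure you feed it depends on the model and which features are enabled:

```shell
# Sketch: read cache = 10% of available cache, clamped to [256, 1024] MB.
read_cache_mb() {
    avail_mb=$1
    rc=$((avail_mb / 10))
    [ "$rc" -lt 256 ] && rc=256      # floor
    [ "$rc" -gt 1024 ] && rc=1024    # ceiling
    echo "$rc"
}

read_cache_mb 1000    # small pool: clamped up to the 256 MB minimum
read_cache_mb 7000    # mid-size pool: plain 10%
read_cache_mb 20000   # large pool: clamped down to the 1024 MB maximum
```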



Wednesday, February 25, 2015

How to gather SPcollect or Service Data Logs for EMC VNXe Series

Product: EMC VNXe Series

Resolution:
There are two ways of gathering Service Data logs:
  1. Collecting from Unisphere GUI (Recommended Method)
  2. Collecting logs through SSH
Collecting from Unisphere GUI (Recommended Method):
  1. Log in to the Unisphere GUI with admin credentials.
  2. Click on Settings and then on Service system.
  3. Enter service password.
  4. Under "System Components" highlight "Storage System".
  5. Select "Collect Service Information" under "Service Actions."
  6. Click "Execute service action."
  7. This message is displayed: "The service data has previously been collected and is available for download. Do you want to download this existing service data or start a new process to collect new service data? Click Yes to download the existing service data file or No to start a new collection of service data."
  8. Select Yes or No as appropriate to your situation.
  9. Click Yes to save the files to your hard drive.

Collecting logs through SSH:
  1. Open an SSH tool (like Putty).
  2. Connect to VNXe management IP through Putty.
  3. Log in as the service user.
  4. Run the svc_dc command. It will take a couple of minutes to complete.
  5. Use third party scp/sftp tools like WinSCP or FileZilla to connect to VNXe management IP (log on as service).
  6. Browse to /EMC/backend/service/data_collection on VNXe.
  7. Copy the .tar file to your local desktop (file name looks like VNXe3100_service_data_APM00113200784_2012-01-11_21_23_26.tar).

How to gather SPcollect files or Logs for EMC VNX or CLARiiON Series Array

Summary: This article describes how to gather SPcollects from a VNX Series array (including arrays with MCx) using Unisphere, USM, or Navisphere Secure CLI, and how to retrieve diagnostic data from a Unified array using Unisphere.

Environment: EMC Hardware: VNX Series
                       EMC Hardware: VNX Series with MCx
                       Product: VNX5100
                       Product: VNX5200
                       Product: VNX5300
                       Product: VNX5400
                       Product: VNX5500
                       Product: VNX5600
                       Product: VNX5800
                       Product: VNX5700
                       Product: VNX7500
                       Product: VNX7600
                       Product: VNX8000
                       EMC Hardware: VNX Unified/File 
                       EMC Hardware: VNX Block
                       EMC Software: VNX Operating Environment (OE) for Block 05.32
                       EMC Software: VNX Operating Environment (OE) for Block 05.33 and later
                       EMC Software: Unisphere
                       EMC Software: Unisphere Service Manager
                       EMC Software: Navisphere Secure CLI

Resolution: There are a number of methods to gather SPcollects:
  1. Start and retrieve SPcollects from each SP using Unisphere.
  2. Launch Unisphere Service Manager either directly or from within Unisphere. This approach has the advantage of automating the whole SPcollect gathering process and gathering File diagnostic data too.
  3. Start and retrieve SPcollects from each SP using Navisphere Secure CLI.
Unisphere:
  1. Launch Unisphere and login.
  2. Select the VNX series array from either the dashboard or from the Systems drop-down menu. Click System on the toolbar.
  3. On the right pane, under Diagnostic Files, select 'Generate Diagnostic Files - SPA'. Confirm that it is OK to continue.  "Success" will be displayed when the SPcollect starts, but this only means the script has been started and will still take several minutes to complete.
  4. Repeat step 3 for SP B.
  5. It will take around 15 minutes to generate a complete SPcollect file.
  6. Still on the right pane, select 'Get Diagnostic files - SP A'.
  7. When the SPcollect file has completed, a file with the following name will be listed: <ArraySerialnumber>_SPA_<date_time(GMT)_code>_data.zip
  8. Sorting by descending order of date is a good way to find the latest SPcollect and the zip file will generally be over 10MB.  If the file has not appeared, press refresh every minute or so until the correct _data.zip file appears.
  9. On the right-hand side of the box, select the location on the local computer, where the SPcollects should be transferred to.
  10. On the left hand side of the box select the file to be transferred.  Note, if a file is listed that ends in runlog.txt, this indicates that the SPcollects are still running. Wait until the data.zip is created.
  11. Repeat Steps 6-10 on SP B to retrieve its diagnostic files.
Unisphere Service Manager:
  1. Log in to Unisphere client.
  2. Select the VNX, either from the dashboard or from the Systems drop-down. Click System on the toolbar.
  3. On the right pane, under Service Tasks, select 'Capture Diagnostic Data'.  This will launch USM.  Alternatively USM can be launched directly from the Windows Start menu.
  4. Select the Diagnostics tab and select ‘Capture Diagnostics Data’.  This will launch the Diagnostic Data Capture Wizard.
  5. The Wizard will capture and retrieve SPcollect files from both SP and Support Materials from the File storage, which will then be combined into a single zip file.
Navisphere Secure CLI:
Perform the following steps:
  1. Open a command prompt on the Management Station.
  2. Type cd "C:\Program Files\EMC\Navisphere CLI" - This is the default installation folder for Windows, but the path the file was installed to may have been overridden.  Other platforms, such as Linux, would have a different folder structure, but the commands are the same. The CLI folder may already be in the path statement, in which case, the commands can be run from any directory.
  3. Type naviseccli -h <SP_A_IP_address> spcollect
  4. Type naviseccli -h <SP_B_IP_address> spcollect
  5. These commands start the SPcollect script on each SP.  Additional security information may also need to be specified
  6. Wait at least 10 minutes for the SPcollects to run, before attempting to retrieve them.
  7. Type naviseccli -h <SP_IP_address> managefiles -list
  8. This will list the files created by SPcollect.  Check that a file with the current date and time in GMT has been created, ending with _data.zip.  If there is a file ending with .runlog instead, then the SPcollect is still running, so wait for a while longer before retrying this.
  9. Type naviseccli -h <SP_IP_address> managefiles -retrieve
          This will display the files that can be moved from the SP to the Management Station.
          Example:
          Index Size in KB     Last Modified            Filename
          0     339       06/25/2013 00:45:42  admin_tlddump.txt
           ...
          10    24965     06/24/2013 23:39:53  FNM00125001234_SPB_2013-06-24_22-38-15_32a007_data.zip
          11    41577     06/25/2013 00:17:17  FNM00125001234_SPB_2013-06-24_23-10-55_32a007_data.zip
           ...
    10.  Enter files to be retrieved with index separated by comma (1,2,3,4,5) OR by a range (1-3) OR enter 'all' to retrieve all file OR 'quit' to quit> 11
This will pull index number 11 (the most recent ~_data.zip file) from the corresponding SP and copy it to the C:\Program Files\EMC\Navisphere CLI directory, with a filename of FNM00125001234_SPB_2013-06-24_23-10-55_32a007_data.zip
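Picking the newest _data.zip out of the managefiles listing can also be scripted. This sketch parses a pasted copy of the example listing above and assumes, as in that example, that the listing is ordered oldest-to-newest:

```shell
# Sketch: find the index of the last _data.zip entry in a managefiles -list
# style listing. On the array, feed in the real output of:
#   naviseccli -h <SP_IP_address> managefiles -list
LISTING='Index Size Last Modified Filename
0 339 06/25/2013 00:45:42 admin_tlddump.txt
10 24965 06/24/2013 23:39:53 FNM00125001234_SPB_2013-06-24_22-38-15_32a007_data.zip
11 41577 06/25/2013 00:17:17 FNM00125001234_SPB_2013-06-24_23-10-55_32a007_data.zip'

LATEST=$(echo "$LISTING" | awk '/_data\.zip$/ {idx = $1} END {print idx}')
echo "latest _data.zip index: $LATEST"
```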


Notes: While an SPcollect is running, a file ending runlog.txt will be visible.  This just tracks the progress of the SPcollect and will disappear once the SPcollect completes (it gets moved into the SPcollect data.zip file). There is no need to upload the runlog.txt by itself and it will be necessary to wait until the SPcollect completes on each SP, before uploading the data.zip files.