Saturday, November 10, 2012

isilon


Isilon

101


isilon stores both the Windows SID and the Unix UID/GID with each file.
When an NFS client looks at a file that was created from Windows, the file may not have a UID/GID on it.
isilon looks up the conversion in its mapping DB;
if it can't find one, it generates a number, starting at 10000.
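
A quick way to see this in action (hedged sketch: the file path and UID below are made-up examples; the mapping command is the OneFS 6.5 syntax used further down):

ls -ln /ifs/data/somefile.txt                # numeric owner shown; an unfamiliar high UID suggests isilon generated it
isi auth mapping list --source=UID:1000014   # show what SID that generated UID maps to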


maintenance commands

isi_gather_info  # collect status of cluster and send to support (usually auto upload via ftp)

HD Replacement

isi devices     # list all devices of the node you are logged in to
isi devices -a status -d 14:bay28 # see status of node 14, drive bay 28
isi devices -a add    -d 14:28  # add the drive (after being replaced)
isi devices -a format -d 14:28  # often need to format the drive for OneFS use first
     # it seems that after format it will automatically use the drive (no ADD needed)

# other actions are available, e.g. smartfail a drive.

isi_for_array -s 'isi devices | grep -v HEALTHY' # list all problematic devices across all nodes of the cluster.
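
A hedged sketch of a cron-able wrapper around the command above (admin@example.com is a placeholder; depending on OneFS version the output may still contain per-node header lines, so tune the greps):

#!/bin/sh
# mail a cluster-wide report of any non-HEALTHY device lines
OUT=$(isi_for_array -s 'isi devices' | grep -v HEALTHY)
echo "$OUT" | mail -s "isilon drive status (non-HEALTHY lines)" admin@example.com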


isi statistics drive --long  # OneFS 6.5 command to see utilization of each HD.

user mapper stuff


id username
id windowsDomain\\windowsUser
    # Note that the username may be case sensitive!!

isi auth ads users  list --uid=50034
isi auth ads users  list --sid=S-1-5-21-1202660629-813497703-682003330-518282
isi auth ads groups list --gid=10002
isi auth ads groups list --sid=S-1-5-21-1202660629-813497703-682003330-377106

isi auth ads users list -n=ntdom\\username


# find out the Unix UID to Windows SID mapping:
# OneFS 6.5 has new commands vs 6.0
isi auth mapping list  --source=UID:7868
isi auth mapping rm    --source=UID:1000014
isi auth mapping flush --source=UID:1000014   # this clears the cache
isi auth mapping flush --all 
isi auth local user list -n="ntdom\username" -v # list isilon local mapping

isi auth mapping delete --source-sid=S-1-5-21-1202660629-813497703-682003330-518282 --target-uid=1000014 --2way
 # should delete the SID to UID mapping, both ways.
isi auth mapping delete --target-sid=S-1-5-21-1202660629-813497703-682003330-518282 --source-uid=1000014
 # may need to repeat this if the mapping is not deleted.

isi auth mapping dump | grep S-1-5-21-1202660629-813497703-682003330-518282
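
Putting the above together, a hedged sketch of the full cleanup for one bogus mapping (same example SID/UID as above, 6.5 syntax):

isi auth mapping delete --source-sid=S-1-5-21-1202660629-813497703-682003330-518282 --target-uid=1000014 --2way
isi auth mapping delete --target-sid=S-1-5-21-1202660629-813497703-682003330-518282 --source-uid=1000014
isi auth mapping flush  --source=UID:1000014                # drop the cached entry as well
isi auth mapping dump | grep 518282 || echo "mapping gone"  # verify nothing is left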

isi auth ads group list --name

isi auth local users delete --name=ntdom\\username --force

RFC 2307 is the preferred auth mechanism... Windows AD w/ Services for Unix.


isi smb permission list --sharename=my_share






# finding windows sid??  rm --2way ??



    # find out Unix UID mapping to Windows SID mapping:
    # OneFS 6.0: 
    isi auth ads users map list --uid=7868
    isi auth ads users map list --sid=S-1-5-21-1202660629-813497703-682003330-305726

    isi auth ads users map delete --uid=10020
    isi auth ads users map delete --uid=10021
    isi_for_array -s 'lw-ad-cache --delete-all'  # update the cache on all cluster nodes
    # Windows clients need to unmap and remap the drive for the new UID to be looked up.

    # for OneFS 6.0.x only (not 6.5.x as it has a new CIFS backend and also stopped using likewise)
    # this is how to look up the SID to UID/GID maps directly:
    
    sqlite3 /ifs/.ifsvar/likewise/db/idmap.db 'select sid,id from idmap_table where type=1;' # list user  sid to uid map
    sqlite3 /ifs/.ifsvar/likewise/db/idmap.db 'select sid,id from idmap_table where type=2;' # list group sid to gid map
    1:  The DB that you are looking at only has the fields that you are seeing listed.  
    With the current output it will give you the SID and UID of the users mapped.  
    With these commands you can find the username that is mapped to that information:
    #isi auth ads users list --uid={uid}
    or
    #isi auth ads users list --sid={sid}

    2:  The entries in the DB are made as the users authenticate to the cluster.  
    So when a client tries to access the share, the client sends over the SID, 
    we check the DB and if no entry is found, we check with NIS/LDAP, 
    if nothing is found there, we generate our own ID (10000 range) and add it to the DB.  
    Any subsequent access from that SID will be mapped to the UID in that DB.

    3:  You can run the following to get the groups and the same rules 
    apply for the GID and SID lookups:
    #sqlite3 /ifs/.ifsvar/likewise/db/idmap.db 'select sid,id from idmap_table where type=2;'
    #isi auth ads groups list --gid={gid}
    #isi auth ads groups list --sid={sid}

    4:  You can delete the entries in the database, 
    but the current permissions on files will remain the same.  
    So when the user re-accesses the cluster he will go through the 
    process outlined in question 1.
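
    Building on the sqlite queries above, a hedged example (6.0.x only) to count how many
    mappings were generated by the cluster itself, i.e. fall in the 10000 range described
    in point 2 (assumes your real NIS/LDAP UIDs sit below 10000):

    sqlite3 /ifs/.ifsvar/likewise/db/idmap.db 'select count(*) from idmap_table where type=1 and id >= 10000;'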
    



Snapshot

Snapshots take up space that is counted as part of the usable space on the fs.
cd .snapshot    # snapshots are accessible under the .snapshot directory
Admin can manually delete snapshots, or take a snapshot of a specific directory tree instead of the whole OneFS.
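
A quick, hedged way to eyeball how much data each snapshot holds, assuming the cluster-wide snapshots appear under /ifs/.snapshot (plain du; blocks shared between snapshots get counted more than once, so treat the numbers as an upper bound):

du -sh /ifs/.snapshot/*   # rough per-snapshot sizes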

CIFS

ACL

ls -led   # show ACL for the current dir (or file if filename given)
ls -l   # regular unix ls, but a + after the permission bits indicates the presence of a CIFS ACL
setfacl -b filename # remove all ACLs from the file, turning it back to plain unix permissions
chmod +a user DOMAIN\\username  allow generic_all /ifs/path/to/file.txt  # place NTFS ACL on file, granting user full access


ls -lR | grep -e "+" -e "/" | grep -B 1 "+"    # recursively list files with NTFS ACL, short version
ls -lR | grep -e "^.......... +" -e "/"  | grep -B 1 "^.......... +" # morse code version, works better if there are files w/ + in the name

Time Sync


isi_for_array -s 'isi auth ads dc' # check which Domain Controller each node is using
isi_for_array -s 'isi auth ads dc --set-dc=MyDomainController'  # set DC across all nodes
isi_for_array -s 'isi auth ads time'  # check clock on each node

isi auth ads time --sync   # force cluster to sync time w/ DC (all nodes)

isi auth ads status   # check join status to AD
killall  lsassd    # reset daemon, auth off for ~30sec, should resolve offline AD problems
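
A hedged, AD-free way to eyeball clock drift between nodes: print epoch seconds on every node and compare against the node you are on.

isi_for_array -s 'date +%s'   # epoch seconds on each node
date +%s                      # local node, for comparison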

"unix" config

Syslog

isi_log_server add SYSLOG_SVR_IP [FILTER]
-or-
vi /etc/mcp/templates/syslog.conf
isi_for_array -sq 'killall -HUP syslogd'
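
For reference, the forwarding entry added to the template is normally just standard BSD syslog.conf syntax; a hedged example, with 192.0.2.10 as a placeholder remote syslog server:

*.*        @192.0.2.10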

Disable user ssh login to isilon node

For Isilon OneFS 6.0:
vi /etc/mcp/templates/sshd_config
add line
AllowUsers root@* 
Then copy this template to all the nodes:
cp /etc/mcp/templates/sshd_config /ifs/ssh_config
isi_for_array 'cp /ifs/ssh_config /etc/mcp/templates/sshd_config'
One may need to restart sshd, but in my experience sshd picks up the new template in less than a minute and users are then prevented from logging in via ssh.
In OneFS 6.5, maybe the template will be replicated to all nodes automatically? Or maybe that is only done for syslogd and not sshd, as they are presumably concerned a bad sshd_config could lock users out of ssh on all the nodes at once...
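
A hedged sanity check after pushing the template, to confirm the AllowUsers line landed everywhere:

isi_for_array -s 'grep AllowUsers /etc/mcp/templates/sshd_config'
isi_for_array -s 'grep AllowUsers /etc/ssh/sshd_config'   # assumes mcp renders the template to the stock FreeBSD location; adjust if your OneFS differs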

Links

  1. Isilon.com

History

OneFS 5.0
OneFS 5.5
OneFS 6.0 ca 2011.03 - Supports mixed node types - IQ 10000 and IQ 6000 in the same cluster. Single host entry in AD for the whole Isilon cluster.
OneFS 6.5 EA 2011.04 ? - SSDs on the higher-end nodes will cache metadata even for data on lower-end nodes w/o SSD. CIFS is a complete rewrite, and authentication with AD has changed. Test test test!!
(2011) Acquired by EMC.




[Doc URL: http://dl.dropbox.com/u/31360775/psg/isilon.html]

(cc) Tin Ho. See main page for copyright info.
Last Updated: 2012-06-21



"LYS on the outside, LKS in the inside"
"AUHAUH on the outside, LAPPLAPP in the inside"
psg101 sn50 tin6150
