[gpfsug-discuss] Trapped Inodes
Luke Raimbach
Luke.Raimbach at crick.ac.uk
Sun Jul 3 15:55:26 BST 2016
Hi Marc,
Thanks for that suggestion. This seems to have removed the NULL fileset from the list; however, mmdf now shows even stranger statistics:
Inode Information
-----------------
Total number of used inodes in all Inode spaces: -103900000
Total number of free inodes in all Inode spaces: -24797856
Total number of allocated inodes in all Inode spaces: -128697856
Total of Maximum number of inodes in all Inode spaces: -103900000
Any ideas why these negative numbers are being reported?
Cheers,
Luke.
From: gpfsug-discuss-bounces at spectrumscale.org On Behalf Of Marc A Kaplan
Sent: 02 July 2016 20:17
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Subject: Re: [gpfsug-discuss] Trapped Inodes
I have been informed that a glitch (for example, an abrupt shutdown) can leave you in a situation where it looks like all snapshots are deleted, but there is still a hidden snapshot that must be cleaned up...
The workaround is to create a snapshot with `mmcrsnapshot fs dummy`, then delete it with `mmdelsnapshot fs dummy`, and see if that clears up the situation...
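Spelled out, the sequence would look something like this (just a sketch; `fs` stands in for your actual filesystem device name):

# create a throwaway global snapshot named 'dummy'
mmcrsnapshot fs dummy

# delete it again; per the workaround above, this may flush any hidden leftover snapshot state
mmdelsnapshot fs dummy

# then check whether the stuck fileset and the inode counters look sane again
mmlsfileset fs -L
mmdf fs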
--marc
From: Luke Raimbach <Luke.Raimbach at crick.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 07/02/2016 06:05 AM
Subject: Re: [gpfsug-discuss] Trapped Inodes
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________
Hi Marc,
Thanks for the suggestion.
Snapshots were my first suspect but there are none anywhere on the filesystem.
Cheers,
Luke.
On 1 Jul 2016 5:30 pm, Marc A Kaplan <makaplan at us.ibm.com> wrote:
Question and Suggestion: Do you have any snapshots that might include files that were in the fileset you are attempting to delete? Deleting those snapshots will allow the fileset deletion to complete. The snapshots are kinda intertwined with what was the "live" copy of the inodes. In the GPFS "ditto" implementation of snapshotting, for a file that has not changed since the snapshot operation, the snapshot copy is not really a copy but just a pointer to the "live" file. So even after you have logically deleted the "live" files, the snapshot still points to those inodes you thought you deleted. Rather than invalidate the snapshot (you wouldn't want that, would you?!), GPFS holds onto the inodes until they are no longer referenced by any snapshot.
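A quick way to check is roughly the following (again just a sketch; `fs` is a placeholder for your filesystem device name, and <snapshot> is whatever name turns up in the listing):

# list all snapshots known to the filesystem
mmlssnapshot fs

# delete any snapshot that could still reference files from the deleted fileset
mmdelsnapshot fs <snapshot>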
--marc
From: Luke Raimbach <Luke.Raimbach at crick.ac.uk>
To: gpfsug main discussion list <gpfsug-discuss at spectrumscale.org>
Date: 07/01/2016 06:32 AM
Subject: [gpfsug-discuss] Trapped Inodes
Sent by: gpfsug-discuss-bounces at spectrumscale.org
________________________________
Hi All,
I've run out of inodes on a relatively small filesystem. The total metadata capacity allows for a maximum of 188,743,680 inodes.
A fileset containing 158,000,000 inodes was force-deleted and has gone into a bad state, where it is reported as (NULL) and has status "deleted":
Attributes for fileset (NULL):
===============================
Status Deleted
Path --
Id 15
Root inode latest:
Parent Id <none>
Created Wed Jun 15 14:07:51 2016
Comment
Inode space 8
Maximum number of inodes 158000000
Allocated inodes 158000000
Permission change flag chmodAndSetacl
afm-associated No
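For reference, I have been inspecting and repairing with roughly the following (a sketch; fs0 stands in for the real device name):

# show detailed fileset attributes; the broken fileset shows up with status Deleted
mmlsfileset fs0 -L

# offline check/repair: the filesystem has to be unmounted on all nodes first
mmumount fs0 -a
mmfsck fs0 -y
mmmount fs0 -a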
Offline mmfsck fixed a few problems, but didn't free these poor, trapped inodes. Now I've run out and mmdf is telling me crazy things like this:
Inode Information
-----------------
Total number of used inodes in all Inode spaces: 0
Total number of free inodes in all Inode spaces: 27895680
Total number of allocated inodes in all Inode spaces: 27895680
Total of Maximum number of inodes in all Inode spaces: 34100000
Current GPFS build: "4.2.0.3".
Who will help me rescue these inodes?
Cheers,
Luke.
Luke Raimbach
Senior HPC Data and Storage Systems Engineer,
The Francis Crick Institute,
Gibbs Building,
215 Euston Road,
London NW1 2BE.
E: luke.raimbach at crick.ac.uk
W: www.crick.ac.uk
The Francis Crick Institute Limited is a registered charity in England and Wales no. 1140062 and a company registered in England and Wales no. 06885462, with its registered office at 215 Euston Road, London NW1 2BE.
_______________________________________________
gpfsug-discuss mailing list
gpfsug-discuss at spectrumscale.org
http://gpfsug.org/mailman/listinfo/gpfsug-discuss