Thanks Zdenek for the quick reply.
We are using thin volumes and thin snapshots, but we need all or most snapshots active, so we enable the snapshots by default.
As you suggested, we can work around this by mounting the snapshot volumes with the 'nouuid' option.
But the problem is that in most of our use cases the origin volume is mounted via /etc/fstab, and the mount entry there uses the filesystem UUID.
So in some cases the snapshot volume gets mounted instead of the origin volume.
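To make the failure mode concrete, our setup looks roughly like the sketch below (the UUID and the VG/LV names are made up for illustration). Since the origin and an activated snapshot carry the same filesystem UUID, either device can end up satisfying the UUID= entry:

    # /etc/fstab - matches by filesystem UUID, so an active snapshot can be
    # picked instead of the origin
    UUID=0a1b2c3d-1111-2222-3333-444455556666  /data  xfs  defaults  0 0

    # a device-path based entry would pin the mount to the origin LV
    /dev/mapper/vg-origin  /data  xfs  defaults  0 0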
Sent: Thursday, June 5, 2014 2:49:51 PM
Subject: Re: [linux-lvm] File-system uuid on LVM snapshot
Post by Rajesh Joseph
Hi all,
The origin volume and the snapshot volume share the same file-system UUID, so after
taking the snapshot we fix the UUID by running xfs_admin or tune2fs.
Do you have any recommendation or best practice in this regard?
With thin pools and thin volumes, a snapshot of a thin volume is now created
'inactive' and is skipped from default activation (you can always
override the skip with -K, i.e.: lvchange -ay -K vg/mythinsnap).
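A rough example, with made-up LV names (treat this as a sketch, not a recipe):

    # a snapshot of a thin LV is created inactive, with the activation-skip flag set
    lvcreate -s -n mythinsnap vg/mythinlv
    # -K (--ignoreactivationskip) overrides the skip flag for this activation
    lvchange -ay -K vg/mythinsnap
    # alternatively, -kn at creation time leaves the skip flag unset, if you
    # really do want snapshots to be activatable by default
    lvcreate -s -kn -n mythinsnap2 vg/mythinlv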
So with thins you should mostly have only a single active volume with the FS
UUID. If you happen to have multiple volumes active and you need to mount an xfs
filesystem, use the 'nouuid' mount option (and possibly 'norecovery' for
read-only activated snapshots).
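Something like this (device and mount-point names are illustrative):

    # mount an XFS snapshot alongside its origin
    mount -o nouuid /dev/vg/mythinsnap /mnt/snap
    # for a snapshot activated read-only, skip log recovery as well
    mount -o ro,nouuid,norecovery /dev/vg/mythinsnap /mnt/snap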
For old-style snapshots, all volumes need to be available/active, so you always
need to use the 'nouuid' option.
I don't see much point in changing the FS UUID on your snapshot, unless of
course you plan to use the snapshots as different volumes with just a 'single'
starting point (i.e. a preinstalled tree of files).
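If you do go that route, it would look roughly like this (device names are
illustrative; run with the filesystem unmounted):

    # XFS: write a new random filesystem UUID
    xfs_admin -U generate /dev/vg/mythinsnap
    # ext2/3/4 equivalent
    tune2fs -U random /dev/vg/mythinsnap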
Regards
Zdenek