-
As far as I can see this is expected behaviour. A clone does not duplicate any data (that's the whole point), and so it is dependent on the original dataset.
The first sentence here does read somewhat as if the dependency is gone. However, if you continue reading, it very clearly states that the original dataset is basically now the clone. You still only have one copy of the data on disk, so your options for deleting come down to (a) delete the clone, or (b) delete the current "primary" dataset and destroy both in the process. If you plan to use the clone (
-
Hello Ahrens, thanks for your answer. I understand what you say, and what is written does seem to be the case. To me, this then becomes a question of terminology. Taking the example of BTRfs: if I create a snapshot of a dataset, that dataset still has its own life; it can be modified, destroyed, whatever. There is no snapshot hierarchy either. The clone only counts the difference with its "master" as disk space, no more. With ZFS, creating a real clone then seems possible only at the price of zfs send | zfs receive. The purpose I want to achieve is for students to work on their own directory tree, with rollback based on versioning (it should be possible to roll back to any version without losing snapshots taken later in time), starting from an original containing the exercise. Once a student has finished, they can keep their clone for later use, or the clone is destroyed to free space. If this is not a bug, maybe it could become a feature request? Thanks, Brgrds
-
> With ZFS, creating a real clone then seems possible only at the price of zfs send | zfs receive.

I'm confused at what you mean by a "real" clone. The point of a clone is to create two separate, identical datasets without having to duplicate any data. The clone takes no space other than new data written to it. If needed, you can delete the clone with no issue. Now of course, if you try to delete the original dataset, you can't, as it has dependent clones. The clone only stores its differences and relies on the data in that original dataset. If you really want to scrap the original but keep the clone, you need to promote the clone so it becomes the "master". I can't see how this could possibly work any other way without keeping two full copies (i.e. send/recv). I know nothing of BTRFS, but I can't see how you could snapshot/clone a dataset with zero data copied and then delete that original dataset.

One possible "improvement" here could be the ability to destroy a master and have the data automatically promoted to the oldest clone. I would probably still suggest ZFS show a warning for this, though.

For your teaching example you just need a base dataset containing the exercise. Clone it for each student but do not run promote on any of them. Each student can work on their own clone and make their own snapshots, and all the clones can be removed if not needed. They will all only take the space required by changes. The only caveat is that rolling back on ZFS removes the intermediate snapshots, so you can't roll back to a month ago and then go forward again. This isn't an issue with cloning, but with ZFS in general. (You can of course still look in any snapshot to retrieve data from that point in time.)
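A minimal sketch of that setup, assuming a hypothetical pool/exercise base dataset and a student called alice:

```sh
zfs create pool/students                                # parent for the per-student clones
zfs snapshot pool/exercise@handout                      # freeze the exercise content
zfs clone pool/exercise@handout pool/students/alice     # one writable clone per student, no data copied
zfs snapshot pool/students/alice@step1                  # students can snapshot their own clone
zfs destroy -r pool/students/alice                      # at the end, only the clone and its snapshots go away
```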
-
Hello Matt, agreed, there is some confusion, probably coming from me. Looking back, I sense it comes from having played with BTRfs, which made it simpler to achieve my needs, and probably from me being far from mastering ZFS. With BTRfs, a snapshot has its own life and is mountable. It relies on its master of course, but once the snapshot is built, that is all. So I snapshot the directory to another mount point, the student does whatever they want, and at the end I destroy the snapshot, no more. On ZFS, a snapshot is not a mountpoint unless I deliberately mount it, which has a mountpoint cost. To become a directory, it must be zfs cloned. I understand a 'static' image is required for the clone, and so the snapshot is needed for this. But what if there were no snapshot? The original could keep evolving, OK. Would it make a difference? Some blocks would change, the clone would reflect that, OK. If that is not wanted, then a snapshot remains possible.
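For what it's worth, a snapshot is already reachable read-only under the hidden .zfs directory of its dataset, and only a writable view needs a clone. A small sketch with hypothetical names (pool/mydir mounted at /pool/mydir):

```sh
zfs snapshot pool/mydir@T0
ls /pool/mydir/.zfs/snapshot/T0/        # read-only view of the snapshot, no clone or extra mount needed
zfs clone pool/mydir@T0 pool/mydir_T0   # a writable view does require a clone
```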
-
About "They will all only take the space required by changes. The only caveat is that rolling back on ZFS removes the intermediate snapshots, so you can't rollback to a month ago, then go forward again. This isn't an issue of cloning, but ZFS in general. (You can of course however look in any snapshot to retrieve data from that point in time)" Agree, that is a real concern.. Since I have this strong need and mounting to copy is not acceptable (due to the delay on big amount of data), I may have found a workaround playing on snapshot+clone. My idea I just got (still to test & check) would then be to have a clone per version. To be tested :-) |
-
To 'unpromote' a clone, you can just promote the original, and you can delete the clone after that. In your example:
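Something along these lines (a sketch; the snapshot travels back to the original when you re-promote):

```sh
zfs promote MYdataset       # the @MYmark snapshot moves back to MYdataset
zfs destroy MYNEWdataset    # the clone now has no dependents and can be destroyed
```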
This will do the trick. You can also keep just MYNEWdataset:
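Again as a sketch, with MYNEWdataset left promoted as in the issue:

```sh
zfs destroy MYdataset             # after 'zfs promote MYNEWdataset', MYdataset is the dependent clone
zfs destroy MYNEWdataset@MYmark   # optionally drop the now-unreferenced snapshot as well
```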
-
Ookkk, thanks Ivan. I am currently moving data from BTRfs back to ZFS; I will then see how far I can go towards reaching the same situation as before (in terms of flexibility, I mean), because BTRfs, in terms of performance, is really bad; ZFS is far ahead...
-
Hello guys, I am taking the opportunity of having skilled ZFS fellows around to be sure. To get back the rollback feature WITH versioning (a ZFS rollback destroys the more recent snapshots), I wonder if using zfs clone could do it. My idea would then be NOT to restore T1, but to zfs clone T1 as a new dataset at the previous location/mountpoint. HOWEVER, I wonder whether, after a while, multiple rollbacks of this kind would not end up affecting the consistency of the ZFS dataset. Feedback welcome. Thanks
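Roughly, with hypothetical names (pool/work mounted at /work, with snapshots @T0, @T1, @T2):

```sh
zfs set mountpoint=/work_current pool/work   # move the live dataset aside
zfs clone pool/work@T1 pool/work_T1          # writable copy of the state at T1
zfs set mountpoint=/work pool/work_T1        # expose it at the previous location
# @T2 and the current contents of pool/work stay intact
```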
-
Digging a bit, I realize ZFS might be similar to BTRfs regarding snapshots and might be able to provide the same service. ZFS exposes its snapshots under MYdataset/.zfs/snapshot/[name]. Now, is it possible to snapshot the snapshot? ;-) Feedback still welcome :-)
-
You can zfs set readonly=on and snapshot twice; you will get two copies of the snapshot. You cannot snapshot the content of .zfs/snapshot/blah.

I used clones quite a bit. You can make a tree of clones if you want. The tricky part with clones is doing incremental sends that preserve the tree structure of the clones. What I did was make the clones children of the original. This way I could replicate the structure using 'zfs send -R' and after that send incremental updates to all clones independently. Not sure how I could add more clones to this setup, though.
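A sketch of that layout, with hypothetical names:

```sh
zfs snapshot pool/base@gold
zfs clone pool/base@gold pool/base/clone1                  # clones created as children of the original
zfs clone pool/base@gold pool/base/clone2
zfs snapshot -r pool/base@repl1                            # recursive snapshot covers the clones too
zfs send -R pool/base@repl1 | zfs receive -d backuppool    # -R preserves the clone/origin structure on the receive side
```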
-
Hello Ivan, thanks for your answer. In fact, I sense the blocking point is precisely the fact that snapshots, and thus clones, are always dependent on a "master". I had the same situation in BTRfs, but there the snapshots were the same type of object as the master, and mountable wherever wanted. To satisfy the need, the solution was simple:
/mydir -> snapshot S_T0 -> /snapshots/S_T0
/mydir -> snapshot S_T1 -> /snapshots/S_T1
/mydir -> snapshot S_T2 -> /snapshots/S_T2
When I needed to put S_T1 in place of the original /mydir, it was simple:
/snapshots/S_T1 -> snapshot mydir -> /mydir
Simple as this, because there is no link between mountpoint and volume name. From this, since /mydir was a snapshot accessible as a directory, it was used for the need. With this same logic, I was able to have versioning... into clones! That is, S_T1 -> /myCLONEdir. Maybe that is a bit confusing as expressed; I am trying to focus on how to satisfy the need rather than on the way BTRfs or ZFS behaves from my understanding :-(
-
The part you are missing is that the clone is a new dataset. So rather than trying to replace /mydir with a child of itself, just use the child.
-- richard
-
Hello Richard, I got you. However, there are constraints to comply with :-) For example, /mydir must be kept, but OK, a mountpoint could do the trick. There is also a hierarchical dependency: /MYpool/MYdataset/MYsubdataset, with MYsubdataset being the clone, must be inside the MYsubdataset subdirectory of the MYdataset directory :-) I know, the need is quite complex :-) Brgrds
-
I've kind of lost track of whether you are still talking about ZFS or BTRFS. In ZFS, a clone can be anywhere in the hierarchy of the pool, not necessarily a child dataset of the primary dataset, and you can reverse the child/parent relationship with 'zfs promote' without moving or renaming any datasets.
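For example (a sketch with hypothetical names):

```sh
zfs clone pool/projects/base@v1 pool/scratch/tryout   # the clone can live anywhere in the pool
zfs promote pool/scratch/tryout                       # origin relationship reversed, nothing renamed or moved
```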
-
Hello Ivan, that is precisely my point: in real life, a clone is an independent living thing, no? I can understand it starts out dependent on a master, but it should be releasable. For example, zfs promote would make it independent from the original master... That is the point of the whole example. Promote is not really a promotion, since there is still some associated data... I don't have this issue with BTRfs; I was hoping there would be some method, but apparently not. It means, to me, that on a nested ZFS dataset project directory tree, I will probably not be able to have multiple clones of a subdataset, and the capability to have any version of a dataset for projectA, object_dir1 or 2...
-
that's what unionfs/overlayfs are for. |
-
> in real life, a clone is an independent living thing, no?
In the computing world, that is known as a copy. There is no biological equivalent to ZFS clones.
-- richard
-
Hello guys, Merry Xmas and thanks for your answers.

@IvanVolosyuk: agreed. It seems BTRfs behaves differently internally. This makes it possible, in fact, to use snapshots as a fully operational copy, independent from the parent. A really useful feature, but in an awfully slow filesystem :-(.

@misterbigstuff: THANKS! I know UnionFS from its use in FreeBSD but did not remember it. It seems it exists on Linux too. If it behaves the same in such a situation with big files, this might be the answer: instead of a zfs snapshot/clone, I could union-mount above the original tree and then do whatever I want; that is just mount points. The limit will then be the maximum number of simultaneous mounts...

@richardelling & @misterbigstuff: agreed, ZFS clones are not really clones. They look more like some sort of aura that can live outside the real body without being independent of it. Cut the spiritual link and the aura dies... Maybe the same spirit as in James Cameron's Avatar :-) I'll test UnionFS to see if it might solve my need!
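A sketch of such a union mount on Linux with overlayfs (all paths here are hypothetical):

```sh
# overlayfs needs a writable upper dir and a work dir on the same filesystem
mount -t overlay overlay \
  -o lowerdir=/exercise,upperdir=/students/alice/upper,workdir=/students/alice/work \
  /students/alice/view
```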
-
Hello all, @misterbigstuff: a unionfs does not seem to cut it. I tested with aufs: upon a directory rename or a file update, the command takes a long time because the entire object's data (either directory or file) is copied up. So it ends up the same as duplicating the data...
-
ZFS and BTRFS have different designs and a different focus. BTRFS focuses on flexibility, while ZFS focuses on stability and ease of administration.
In this example, I can see that I can get 341M back if I delete the
You can think of ZFS clones as BTRFS writable snapshots where space management is more explicit. Bottom line: ZFS clones are not independent from the base dataset because of the explicit space management. You can delete the base dataset, but you have to explicitly promote a clone to sort out the space management questions first. Once you get a grip on the space management questions, you will find that you can create any clone hierarchy and delete any dataset from it if you want. Thus, you can think of ZFS clones as independent from their base datasets, just more explicit in terms of space management.
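That explicit space management is easy to inspect; for instance, a dry-run destroy only reports the space that would be reclaimed (a sketch with hypothetical names):

```sh
zfs destroy -nv pool/base@old    # -n: dry run, -v: print the space that would be reclaimed
zfs destroy -nvR pool/base@old   # same, but including dependent clones in the estimate
```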
-
Hello,
Issue
ZFS provides a dataset cloning feature.
Technically, the procedure is as below:
zfs create MYdataset
zfs snapshot MYdataset@MYmark
zfs clone MYdataset@MYmark MYNEWdataset
zfs promote MYNEWdataset
From this, MYNEWdataset is supposed to be totally independent from its origin, MYdataset.
However, when the clone becomes useless and is to be destroyed, a bug appears:
zfs destroy MYNEWdataset
cannot destroy 'MYNEWdataset': filesystem has children
use '-r' to destroy the following datasets:
MYNEWdataset@MYmark
zfs destroy -r MYNEWdataset
cannot destroy 'MYNEWdataset': filesystem has dependent clones
use '-R' to destroy the following datasets:
MYdataset
So forcing the action would delete the original dataset, which is still in use.
From this, there is no escape but to destroy the original dataset for real and rebuild it.
System information
This happens on ZFS on Ubuntu 18.04.5 and FreeBSD 12.
Versions on Ubuntu:
ZFS version: 0.7.5-1ubuntu16.10
SPL version: 0.7.5-1ubuntu2.2