Hi
Since the beginning, I've had issues with regression testing. Despite the fact that it's very useful, it takes forever, it's easy to make a mistake (especially during "reverse regression testing"), users find it too long and technical, and only a small minority of regressions are ever bisected. And several patches need backporting to allow older versions of Wine to compile and run on today's make, gcc, and libraries - this is the case even for the 1.0.x releases from less than 3 years ago!
The problem is of course compilation. "configure" takes at least 40 seconds, without any way to speed it up on multi-core CPUs. "make" takes > 5 minutes, and it's only taking longer as Wine gets bigger. Compilation is fundamentally complex and technical for users.
But what if we had precompiled binaries, and regression testing consisted of just running different versions of Wine?
Wine binaries take up about 122 MB and take over 5 minutes to compile. There are now 35770 commits between 36def4af0ca85a1d0e66b5207056775bcb3b09ff (Release 1.0) and "origin". That's about 4.4 terabytes of storage and over 4 months of compilation if each of those versions had to be compiled and installed into its own prefix - way beyond what most users are willing or able to store or do. Most patches, however, affect only a few binary files in the end, and compiling successive versions allows "make" to be very quick.
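(For reference, the arithmetic: 35770 commits × 122 MB ≈ 4.36 TB of storage, and 35770 commits × 5 minutes ≈ 178850 minutes, about 124 days - just over 4 months of continuous compilation.)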
So I've written a tool that compiles Wine and adds each commit's binaries into a Git repository. It knows how to compile old versions of Wine (currently as far back as 1.0). It knows that commits affecting only ANNOUNCE, .gitignore, and files in dlls/ or programs/ ending with .c and such don't need to go through the endlessly slow "configure", only "make". It is stateless: if interrupted, it can resume from the last successful commit. It works around bugs in GNU make (you won't believe how many there are...).
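(The "make-only" check could look roughly like this - a sketch, not the tool's actual code - listing the files a commit touches and skipping "configure" only when every one of them matches a known-safe pattern:
$ git diff --name-only HEAD^ HEAD > changed
$ grep -qvE '^(ANNOUNCE|\.gitignore|(dlls|programs)/.*\.c)$' changed || echo "make only"
The second command prints "make only" only when no changed file falls outside the safe patterns.)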
This tool compiled all 35000 or so commits from Wine 1.0 to around 4th October 2011 in only 7 days, generating a Git repository of Wine binaries that's only 26 gigabytes in size. Regression testing with binaries is a pleasure: it takes only a few seconds :-) on each bisection. I bisected a 16-step regression in just 20 minutes, and most of that time was spent running the application and dealing with 2 X-server crashes.
I haven't figured out how to make the binaries available to users. Few users can clone a 26 gigabyte repository, and even fewer places can serve that much to multiple users. Maybe Git can compress it further? The other idea I had is that users should be able to regression test through a GUI tool. Maybe the GUI tool can just download and run the +/- 122 MB binary snapshots for specific commits, instead of having the entire binary repository locally?
Any other ideas? Would you like to see this tool? Can I send an attachment with it?
Thank you
Damjan Jovanovic
On Tue, Oct 18, 2011 at 10:45 AM, Damjan Jovanovic damjan.jov@gmail.com wrote:
[...]
Hi,
While I agree that 26 GB is a lot for most people, it would be no problem for me. Long compilation times were exactly the most important factor that kept me away from regression testing so far. And I don't think I'm the only one in this position, so even if the repository is that huge, I think it might still be worth giving at least a few people the possibility to clone it.
If I could arrange with my VPS provider to buy more disk space, I would even be willing to host a mirror of it online.
I would also definitely like to see how it is all achieved.
Regards, Ferenc Gergely Szilagyi
On 18 October 2011 10:45, Damjan Jovanovic damjan.jov@gmail.com wrote:
(especially during "reverse regression testing"), users find it too long and technical, and only a small minority of regressions are ever bisected.
Not true. Even for the regressions that are still open it's currently 276 bisected vs. 99 not bisected, and Alexandre said about 90% of opened regressions get fixed.
[...]
Scripts for making old versions compile may be useful, but for the most part it sounds like you're essentially duplicating ccache.
On Tue, Oct 18, 2011 at 12:08 PM, Henri Verbeet hverbeet@gmail.com wrote:
On 18 October 2011 10:45, Damjan Jovanovic damjan.jov@gmail.com wrote:
(especially during "reverse regression testing"), users find it too long and technical, and only a small minority of regressions are ever bisected.
Not true. Even for the regressions that are still open it's currently 276 bisected vs. 99 not bisected, and Alexandre said about 90% of opened regressions get fixed.
There's currently another 182 regressions that were closed "ABANDONED". Maybe if regression testing was easier and faster, people wouldn't abandon them?
[...]
Scripts for making old versions compile may be useful, but for the most part it sounds like you're essentially duplicating ccache.
If you are talking about compiling with ccache instead of using the binary repository: "configure" alone takes > 40 seconds, while the average "git bisect" step on the binary repository takes about 4. If you are talking about using ccache to speed up building the binary repository commit by commit: why, when for most commits "make" takes about 5 seconds and skips all unnecessary compilation anyway?
The other advantage I see is the convenience: instead of having to wait or take breaks while each commit compiles, as in normal regression testing, you can have the tool compile a small set of revisions (e.g. the last month) while you are away from your computer, then quickly test them all in one go when you come back. And you can regression test several applications without repeatedly compiling Wine.
On 18 October 2011 13:42, Damjan Jovanovic damjan.jov@gmail.com wrote:
There's currently another 182 regressions that were closed "ABANDONED". Maybe if regression testing was easier and faster, people wouldn't abandon them?
Maybe. That's 182 closed ABANDONED, out of 2590 total closed, so that's on the order of 5-10%. Perhaps it's possible to bring that down by making it easier to do bisects, but it's hardly the case that "only a small minority of regressions are ever bisected".
[...]
Regardless of whether this has many advantages over e.g. ccache, it does strike me that this would mostly be useful for people who already do a lot of regression testing. If it allows those people to do regression tests faster and find more regressions, or find them sooner, that's great, but these aren't the same people who end up abandoning regressions. The problems there are typically more on the level of "What is git?" / "How do I build Wine?" / "Where do I get all these dependencies?", and typically someone would only do one or two bisects.
On Tue, Oct 18, 2011 at 13:42, Damjan Jovanovic damjan.jov@gmail.com wrote:
If you are talking about using compiling with ccache instead of the binary repository, "configure" alone is > 40 seconds
The configure -C option can speed it up a lot.
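(For reference: -C is --config-cache, which makes configure save its test results in config.cache and reuse them on later runs:
$ ./configure -C    # first run creates config.cache
$ ./configure -C    # later runs reuse it and finish much faster
Just delete config.cache whenever the toolchain or dependencies change.)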
Henri Verbeet hverbeet@gmail.com wrote:
On 18 October 2011 10:45, Damjan Jovanovic damjan.jov@gmail.com wrote:
(especially during "reverse regression testing"), users find it too long and technical, and only a small minority of regressions are ever bisected.
Not true. Even for the regressions that are still open it's currently 276 bisected vs. 99 not bisected, and Alexandre said about 90% of opened regressions get fixed.
Moreover, users often get asked 'does reverting commit xxxx help?'. Without performing a proper regression test it's impossible to answer that question.
On Tue, Oct 18, 2011 at 2:32 PM, Dmitry Timoshkov dmitry@baikal.ru wrote:
[...]
Moreover, users often get asked 'does reverting commit xxxx help?'. Without performing a proper regression test it's impossible to answer that question.
Reverting a commit in the latest git is just 1 round of patch+configure+make+run, and reverting to the commit before it in the binary repository is just one git command. Why would you need a "proper regression test"?
Damjan Jovanovic damjan.jov@gmail.com wrote:
[...]
Reverting a commit in the latest git is just 1 round of patch+configure+make+run, and reverting to the commit before it in the binary repository is just one git command. Why would you need a "proper regression test"?
Reverting a patch in the latest git is not always possible; instead, it's a very useful test to revert the patch at the suspected regression point and see if that really helps.
On Tue, Oct 18, 2011 at 09:01, Dmitry Timoshkov dmitry@baikal.ru wrote:
[...]
Reverting a patch in the latest git is not always possible; instead, it's a very useful test to revert the patch at the suspected regression point and see if that really helps.
That still doesn't require a full regression test, just:
$ git checkout -f $SHA1SUM
$ ./configure && make -j4
# test
$ git show $SHA1SUM | patch -p1 -R
$ ./configure && make -j4
# retest
Austin English austinenglish@gmail.com wrote:
[...]
How do you know that $SHA1SUM without a regression test?
On Tue, Oct 18, 2011 at 10:26, Dmitry Timoshkov dmitry@baikal.ru wrote:
[...]
How do you know that $SHA1SUM without a regression test?
I was referring to the case you pointed out:
Moreover, users often get asked 'does reverting commit xxxx help?'. Without performing a proper regression test it's impossible to answer that question.
If a developer asks about a specific commit, then of course you know which one to try :).
Otherwise, of course a full regression test is required.
Exciting!
On 10/18/2011 01:45 AM, Damjan Jovanovic wrote:
[...]
Perhaps you could use an intermediary server and a script. The user tells the script "works" or "doesn't", and then the script fetches the binary via rsync from a special directory on the server that holds the git repo. That way the user only needs to download the binaries he runs, and even then they'll be fetched incrementally via rsync magic.
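(A minimal sketch of that fetch step - the server name and rsync module are hypothetical:
$ rsync -az --delete rsync://example.org/wine-builds/$COMMIT/ ~/wine-bisect/build/
Since the destination still holds the previous step's build, rsync's delta transfer only downloads the files that actually differ between the two commits.)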
Thanks, Scott Ritchie
On 18.10.2011 10:45, Damjan Jovanovic wrote:
This tool compiled all 35000 or so commits from Wine 1.0 to around 4th October 2011 in only 7 days, generating a Git repository of Wine binaries that's only 26 gigabytes in size. Regression testing with binaries is a pleasure: it takes only a few seconds :-) on each bisection. I bisected a 16-step regression in just 20 minutes, and most of that time was spent running the application and dealing with 2 X-server crashes.
I already love it.
I haven't figured out how to make the binaries available to users. Few users can clone a 26 gigabyte repository, and even fewer places can serve that much to multiple users. Maybe Git can compress it further? The other idea I had is that users should be able to regression test through a GUI tool. Maybe the GUI tool can just download and run the +/- 122 MB binary snapshots for specific commits, instead of having the entire binary repository locally?
Have you tried compressing the .git directory? Or maybe "git gc" can help.
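(For reference, the usual commands - --aggressive trades much longer repacking time for a smaller pack:
$ git gc --aggressive --prune=now
$ du -sh .git)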
Would you like to see this tool? Can I send an attachment with it?
If the tool is not too big and is textual, why not?
2011/10/18 André Hentschel nerv@dawncrow.de
[...]
I already love it.
Thank you :)
[...]
Would you like to see this tool? Can I send an attachment with it?
If the tool is not too big and is textual, why not?
I've opened a Google Code project: http://raisinrefinery.googlecode.com
No releases yet, but you can:
$ svn co http://raisinrefinery.googlecode.com/svn/trunk/RaisinRefinery
$ cd RaisinRefinery
$ mvn package
$ java -jar RaisinRefinery/target/RetroRaisinCellar-0.1-SNAPSHOT.jar 8a7bc4c727fe6330efdfbbb24e6666ee39364c0d..origin /path/to/wine-git /path/to/binary-repository
to compile the last 5 commits in today's Git.
2011/10/18 André Hentschel nerv@dawncrow.de
[...]
Have you tried compressing the .git directory? Or maybe "git gc" can help.
Thank you, "git gc" reduces it from 26 GB to only 1.5 GB :-). tar.lzma on top of that doesn't help.
Now the next question is, how to get the binaries to run on any distro? Or should I just compile on Ubuntu because most people run that (do they still, after Unity?)?
Thank you
Damjan
On Tue, 25 Oct 2011, Damjan Jovanovic wrote: [...]
Now the next question is, how to get the binaries to run on any distro? Or should I just compile on Ubuntu because most people run that (do they still, after Unity?)?
Compile on Debian Stable or even Debian OldStable, taking care that you still have all the important dependencies, sometimes adding some backported packages. Typically you'd do this in a chroot. Then it will run on all relevant Linux distributions.
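(A minimal sketch of such a chroot setup with debootstrap - the target path and mirror URL are just illustrative:
$ sudo debootstrap stable /srv/wine-chroot http://ftp.debian.org/debian
$ sudo chroot /srv/wine-chroot
then, inside the chroot, enable deb-src lines in sources.list and run "apt-get build-dep wine" to pull in the build dependencies.)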
Alternatively, have you considered doing a .tar.gz of every build snapshot, and placing that on a server somewhere? e.g. a folder full of 36def4af0ca85a1d0e66b5207056775bcb3b09ff.tar.gz files? Then one could write a simple Wine regression bisect tool that implements similar semantics to git bisect, but would essentially wrap wget. Then on your server you could have an index file which is a list of the SHA commit ids. This would save the user having to clone a 26 GB repository when most of the commits will be irrelevant. Extra bonus points for doing a better job of compressing the small deltas between binaries*, rather than compressing full Wine builds.
Joel
* Are binaries deterministic like this? Or do they tend to be completely scrambled?
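A hedged sketch of the wget-wrapping bisect tool Joel describes - the server URL and index layout are invented for illustration, and the index is assumed to list one commit id per line, oldest first, with the first entry known good and the last known bad:

#!/bin/sh
# wine-bisect.sh: binary-search prebuilt Wine snapshots fetched over HTTP
BASE=http://example.org/wine-builds        # hypothetical snapshot server
wget -q -O index "$BASE/index"             # one commit id per line, oldest first
good=1
bad=$(wc -l < index)
while [ $((bad - good)) -gt 1 ]; do
    mid=$(((good + bad) / 2))
    commit=$(sed -n "${mid}p" index)
    wget -q -O snapshot.tar.gz "$BASE/$commit.tar.gz"
    rm -rf snapshot; mkdir snapshot
    tar xzf snapshot.tar.gz -C snapshot    # unpack this build, then test it
    printf '%s: good or bad? ' "$commit"
    read answer
    if [ "$answer" = good ]; then good=$mid; else bad=$mid; fi
done
echo "First bad commit: $(sed -n "${bad}p" index)"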
On Wed, Oct 19, 2011 at 14:08, Joel Holdsworth joel@airwebreathe.org.uk wrote:
Alternatively, have you considered doing a .tar.gz of every build snapshot, and placing that on a server somewhere?
e.g. a folder full of 36def4af0ca85a1d0e66b5207056775bcb3b09ff.tar.gz files?
tar.xz would compress better
Then one could write a simple Wine regression bisect tool that implements similar semantics to git bisect, but would essentially wrap wget. Then on your server you could have an index file which is a list of the SHA commit ids.
This would save the user having to clone a 26 GB repository when most of the commits will be irrelevant.
Cloning a multi-gig repository is a no-go for many (most?) people, especially for a regression test they might do only once or twice...
Extra bonus points for doing a better job of compressing the small deltas between binaries*, rather than compressing full Wine builds.
Maybe you could use stuff like xdelta or bsdiff, but then you may have some issues IMO:
- (not sure) you should use them on non-compressed files (e.g. .tar) to get small diffs
- the total size of individual diffs to download could exceed a single full download (especially for old regressions)
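For illustration, a sketch with xdelta3 (file names hypothetical): the server would encode a delta from one uncompressed snapshot tar to the next, and the user would reconstruct the new tar from the old one plus the delta:
$ xdelta3 -e -s wine-old.tar wine-new.tar old-to-new.xd
$ xdelta3 -d -s wine-old.tar old-to-new.xd wine-new.tar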
To make regression testing faster/easier, we could (in a script of some sort):
- phase 1: detect the "release range failure" (failed between wine-1.X.N and wine-1.X.N+1) using only release binaries [or instruct people to do that first using their distro packages, like RegressionTesting does IIRC]
- phase 2: perform a bisect between these two releases
Just my 2 ¢
Frédéric
On 19/10/11 13:43, Frédéric Delanoy wrote:
On Wed, Oct 19, 2011 at 14:08, Joel Holdsworth joel@airwebreathe.org.uk wrote:
[...]
tar.xz would compress better
tar.lzma?
On Wed, Oct 19, 2011 at 02:42:29PM +0100, Ken Sharp wrote:
On 19/10/11 13:43, Frédéric Delanoy wrote:
On Wed, Oct 19, 2011 at 14:08, Joel Holdsworth joel@airwebreathe.org.uk wrote:
[...]
tar.xz would compress better
tar.lzma?
Having tars of all builds would be way larger, I guess. Git compresses and shares objects that are the same.
Ciao, Marcus
On Wed, Oct 19, 2011 at 15:50, Marcus Meissner meissner@suse.de wrote:
[...]
Having tars of all builds would be way larger, I guess. Git compresses and shares objects that are the same.
You're talking about using a git tree just to store binaries for each committed patch, I suppose? But then you would have to download the whole repository (which can be quite big) to get compression benefits, right?
On Wed, Oct 19, 2011 at 04:18:50PM +0200, Frédéric Delanoy wrote:
[...]
You're talking about using a git tree just to store binaries for each committed patch, I suppose? But then you would have to download the whole repository (which can be quite big) to get compression benefits, right?
True, yes.
Ciao, Marcus