Hello everybody,
While talking to David Gümbel today, we came up with the idea that it might be worthwhile to introduce an additional section in the comments that document the exported APIs. Something like 'IMPLEMENTATION STATUS', which would document the author's opinion on how complete the implementation of a given API is. We could introduce a classification scheme similar to:
STUBBED: Well, stubbed.
SKETCHY: Implemented just enough to make a specific application or a small set of applications happy.
SUBSTANTIAL: Implemented a substantial part of the API (perhaps as much as is documented on MSDN).
COMPLETE: The author of this API considers the implementation complete.
REVIEWED: The code was reviewed for completeness and correctness.
It would be cool for various reasons, if we could do this in a machine readable way:
1. We could generate the 'Wine DLLs Status' page automatically with every release, and we would have a historical record in cvs, which gives another view on the progress we make.
2. We could establish guidelines like "A patch which touches an API marked REVIEWED can only go into cvs if reviewed by n people other than the author".
3. Tools like ITOMIG's ganymede, which try to give an estimate on how well an application can be expected to work on wine by doing a static analysis to figure out the used APIs, could apply this information.
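As a hypothetical sketch of point 1: assuming the status tags had already been extracted per function, a generator for the 'Wine DLLs Status' page could be quite small. The input tuples and the status names used below are my assumptions, not an agreed format.

```python
# Hypothetical sketch: aggregate per-function status tags into a
# 'Wine DLLs Status' style summary. The (dll, function, status) input
# tuples and the status names are assumptions, not an agreed format.
from collections import Counter

def status_page(entries):
    """entries: iterable of (dll, function, status) tuples.
    Returns a plain-text per-DLL summary."""
    per_dll = {}
    for dll, _function, status in entries:
        per_dll.setdefault(dll, Counter())[status] += 1
    lines = []
    for dll in sorted(per_dll):
        counts = per_dll[dll]
        total = sum(counts.values())
        done = counts['COMPLETE'] + counts['REVIEWED']
        lines.append('%s: %d APIs, %d%% complete' % (dll, total, 100 * done // total))
    return '\n'.join(lines)

print(status_page([('advapi32', 'LsaFreeMemory', 'COMPLETE'),
                   ('advapi32', 'LsaLookupNames', 'SUBSTANTIAL')]))
# prints: advapi32: 2 APIs, 50% complete
```

Run once per release and the output could be committed alongside the tree, giving the historical record mentioned above.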
Since COM classes, which account for a substantial part of win32, are not directly exposed via the APIs, we probably would have to think about a similar machine parsable documentation scheme here.
What do people think about this?
Bye,
Hi,
On 11/10/05, Michael Jung mjung@iss.tu-darmstadt.de wrote:
Hello everybody,
While talking to David Gümbel today, we came up with the idea that it might be worthwhile to introduce an additional section in the comments that document the exported APIs. Something like 'IMPLEMENTATION STATUS', which would document the author's opinion on how complete the implementation of a given API is. We could introduce a classification scheme similar to:
<snip>
- We could generate the 'Wine DLLs Status' page automatically with every
release and we would have a historical record in cvs, which gives another view on the progress we make.
We have a dynamic page like this for ReactOS, though I don't like it because it only supports implemented and unimplemented tags in the function comment. Having a semi-implemented tag would be nice. Alexandre did not like our use of the @implemented and @unimplemented tags at the time because it did not take into account APIs that were partly implemented, like you propose.
http://svn.reactos.com/api/index2.html
It would be really great if the two projects could agree on a system.
Thanks Steven
Hi,
Michael Jung wrote:
Something like 'IMPLEMENTATION STATUS', which would document the author's opinion on how complete the implementation of a given API is. We could introduce a classification scheme similar to:
STUBBED: Well, stubbed.
SKETCHY: Implemented just enough to make a specific application or a small set of applications happy.
SUBSTANTIAL: Implemented a substantial part of the API (perhaps as much as is documented on MSDN).
COMPLETE: The author of this API considers the implementation complete.
REVIEWED: The code was reviewed for completeness and correctness.
Steven Edwards wrote:
We have a dynamic page like this for ReactOS though I don't like it because it only supports implemented and unimplemented tags in the function comment.
I like the states from Michael, and the short @-notation from ReactOS. The implementation status is just a flag; writing a whole section is IMO overkill. Any agreement on this proposal:
@-notation, 5 states:
@unimplemented (=STUBBED)
@sketchy
@substantial
@implemented (=COMPLETE)
@reviewed
2 possible usages:
- Header:

  /******************************************************************************
   * LsaFreeMemory [ADVAPI32.@]    @implemented
   */

- on its own line:

  /******************************************************************************
   * LsaLookupNames [ADVAPI32.@]
   *
   * @substantial
   */
The parser would simply look for the keywords (@substantial, @..); false positives are pretty unlikely (there's not one in the wine source).
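For illustration, a minimal such scanner might look like this. The header pattern and tag names are taken from the examples above; neither is an agreed format.

```python
# Minimal sketch of the proposed keyword scan. Assumes comment headers of
# the form " * FuncName [DLL.@]" with an @-status tag in the same comment
# block; both conventions are taken from the examples in this thread.
import re

STATUS_TAG = re.compile(r'@(unimplemented|sketchy|substantial|implemented|reviewed)\b')
HEADER = re.compile(r'\*\s+(\w+)\s+\[\w+\.@\]')

def scan_source(text):
    """Return a {function: status} mapping for one source file."""
    statuses = {}
    current = None
    for line in text.splitlines():
        m = HEADER.search(line)
        if m:
            current = m.group(1)
        t = STATUS_TAG.search(line)
        if t and current:
            statuses[current] = t.group(1)
            current = None
    return statuses

sample = ("/***************************************************************\n"
          " * LsaFreeMemory [ADVAPI32.@]    @implemented\n"
          " */\n")
print(scan_source(sample))  # {'LsaFreeMemory': 'implemented'}
```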
Markus
On 11/12/05, Markus Amsler markus.amsler@oribi.org wrote:
@-notation, 5 states:
@unimplemented (=STUBBED)
@sketchy
@substantial
@implemented (=COMPLETE)
@reviewed
The proposed method is superfluous and adds unnecessary words to the docs and source. You really don't get any extra information by labeling a function as sketchy or substantial. It's dangerous to label a function as reviewed, because it gives the impression that the function is absolutely complete and free of bugs.

We have a system right now that works well. The three states are stub, semi-stub, and implemented, though implemented is usually denoted by the absence of stub or semi-stub. A stub means nothing happens at all in the function; semi-stub means that the function does something, usually just enough to appease some apps; implemented obviously means it's implemented.

This is also the easiest method in that you would just need to change c2man to parse the FIXMEs at the beginning for the word 'stub' or 'semi-stub'. They are already in most functions, and it would be beneficial to add them to the rest.
-- James Hawkins
Am Samstag, den 12.11.2005, 01:52 +0000 schrieb James Hawkins:
need to change c2man to parse the FIXMEs at the beginning for the word 'stub' or 'semi-stub'. They are already in most functions, and it would be beneficial to add them to the rest.
I have seen most stubs at the end of the format string: "stub\n" or "stub!\n".

grep -i 'FIXME.*stub' dlls/*/* | wc -l

3032  'FIXME.*stub'
  71  'FIXME.*semi.*stub'

1927  'FIXME.*stub\n'
 683  'FIXME.*stub!\n'
 360  'FIXME.*"stub'
Hi James,
On Saturday 12 November 2005 02:52, James Hawkins wrote:
The proposed method is superfluous and adds unnecessary words to the docs and source. You really don't get any extra information by labeling a function as sketchy or substantial. It's dangerous to label a function as reviewed, because it gives the impression that the function is absolutely complete and free of bugs.
As Dimi pointed out, 'reviewed' is kind of orthogonal to the other keywords. So it's indeed probably a bad idea to mix it into an 'implementation status' annotation. But I would guess that most hackers won't be fooled into a '100% bug free' blindness by a tag like 'reviewed'.
We have a system right now that works well. The three states are stub, semi-stub, and implemented, though implemented is usually denoted by the absence of stub or semi-stub.
In my opinion it's not a good idea to interpret the absence of stub or semi-stub as 'implemented': I guess very few of wine's APIs would be considered implemented, yet most of them don't have a 'stub' or 'semi-stub' annotation. This would lead to a level of false positives, which would render the result almost meaningless.
A stub means nothing happens at all in the function; semi-stub means that the function does something, usually just enough to appease some apps; implemented obviously means it's implemented. This is also the easiest method in that you would just need to change c2man to parse the FIXMEs at the beginning for the word 'stub' or 'semi-stub'. They are already in most functions, and it would be beneficial to add them to the rest.
The problem with this is that it's not obvious that the presence of '(semi-)stub' in a FIXME macro has an effect on c2man. So it's kind of a hidden side effect. If we instead put it in a special section in the API comments, people will recognize it and start to imitate it.
If we can agree on an implementation status tagging scheme, it would be cool to write a small tool that scans for 'stub' and 'semi-stub' FIXMEs and maps those to a 'STUBBED' or 'SKETCHY' implementation status flag, respectively. This would give us a head start with the number of APIs tagged.
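A rough sketch of such a mapping tool, under the assumption that the regexes below (based on the grep counts quoted earlier in the thread) catch the common FIXME spellings:

```python
# Hypothetical head-start tool: derive a status flag from the FIXME
# messages already present in a function body. The STUBBED/SKETCHY
# mapping follows the proposal above; the regexes are assumptions based
# on the grep counts quoted earlier in the thread.
import re

SEMI_STUB = re.compile(r'FIXME.*semi[-_ ]?stub', re.IGNORECASE)
STUB = re.compile(r'FIXME.*stub', re.IGNORECASE)

def classify(body):
    """Return 'SKETCHY', 'STUBBED' or None for one function body."""
    if SEMI_STUB.search(body):
        return 'SKETCHY'
    if STUB.search(body):
        return 'STUBBED'
    return None

print(classify('FIXME("stub\\n");'))        # STUBBED
print(classify('FIXME("semi-stub!\\n");'))  # SKETCHY
```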
Bye,
Michael Jung mjung@iss.tu-darmstadt.de writes:
In my opinion it's not a good idea to interpret the absence of stub or semi-stub as 'implemented': I guess very few of wine's APIs would be considered implemented, yet most of them don't have a 'stub' or 'semi-stub' annotation. This would lead to a level of false positives, which would render the result almost meaningless.
It's a rough estimate, but it has the advantage of not requiring code changes. Adding special comments all over the place is going to be a lot of work, maintaining them properly will be even more work (and most likely won't happen on most functions), and despite all that extra work the final stats will be just about as meaningless.
If you want to measure the quality of the implementation a much better way IMO is to measure regression test coverage. This may also motivate people to write more tests, which would do a lot more good than spending time adding magic comments all over the source.
Hi Alexandre,
On Sunday 13 November 2005 20:31, Alexandre Julliard wrote:
If you want to measure the quality of the implementation a much better way IMO is to measure regression test coverage. This may also motivate people to write more tests, which would do a lot more good than spending time adding magic comments all over the source.
That's a great idea (though it measures a different kind of quality than the one I was thinking of). If I understand correctly, this would mean building all of wine with 'make CFLAGS="-fprofile-arcs -ftest-coverage"' (as documented by Aaron Arvey), running wine-test, and then having some script recurse into the dll directories to analyse the '*.c.gcov' files and build summary information from them?
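A rough sketch of the summarizing step: gcov's text output has one line per source line in the form "count:lineno:source", with '-' marking non-executable lines and '#####' marking executable lines that were never run. The aggregation below is my assumption, not an existing Wine script.

```python
# Sketch: summarize line coverage from gcov's text output. Each line is
# "count:lineno:source"; '-' means the line is not executable, '#####'
# means it is executable but was never run. The reporting is a sketch,
# not an existing Wine script.
def summarize_gcov(lines):
    """Return (executed, executable) line counts for one .c.gcov file."""
    executed = executable = 0
    for line in lines:
        count = line.split(':', 1)[0].strip()
        if count == '-':            # not an executable line
            continue
        executable += 1
        if count != '#####':        # '#####' = executable but never run
            executed += 1
    return executed, executable

sample = [
    "        -:    1:#include <stdio.h>",
    "        9:    2:int f(void) { return 1; }",
    "    #####:    3:int g(void) { return 2; }",
]
print(summarize_gcov(sample))  # (1, 2)
```

A wrapper could then walk dlls/*/ and print a per-DLL coverage percentage.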
Bye,
On Sun, 2005-11-13 at 13:31 -0600, Alexandre Julliard wrote:
It's a rough estimate, but it has the advantage of not requiring code changes. Adding special comments all over the place is going to be a lot of work, maintaining them properly will be even more work (and most likely won't happen on most functions), and despite all that extra work the final stats will be just about as meaningless.
With one exception: knowing that something is as complete as the documentation allows us to know is rather important. This is a fairly immutable point in the development of a function, and having reliable information about it is golden. Otherwise, one spends a lot of time just figuring out whether something is supposed to work but doesn't, or is simply an unimplemented aspect that needs love.
So yeah, we don't need to add "STUBBED" or "INCOMPLETE" all over the place. But as people working with that code know (by the nature of their work) that something is DONE, we should mark it as such.
Dimi Paun dimi@lattica.com writes:
With one exception: knowing that something is as complete as the documentation allows us to know is rather important. This is a fairly immutable point in the development of a function, and having reliable information about it is golden. Otherwise, one spends a lot of time just figuring out whether something is supposed to work but doesn't, or is simply an unimplemented aspect that needs love.
So yeah, we don't need to add "STUBBED" or "INCOMPLETE" all over the place. But as people working with that code know (by the nature of their work) that something is DONE, we should mark it as such.
The only way to know that something is done is to write a test case for it and check the behavior against Windows. MSDN cannot be trusted, and marking code as DONE just because it follows what happens to be in the doc today is very misleading.
On Mon, 2005-11-14 at 03:32 -0600, Alexandre Julliard wrote:
The only way to know that something is done is to write a test case for it and check the behavior against Windows. MSDN cannot be trusted, and marking code as DONE just because it follows what happens to be in the doc today is very misleading.
I'm not arguing against test cases -- it's obviously better to have test cases. However, even if we do have them, there's no way of knowing that we are testing all documented aspects. In fact, I doubt we have such exhaustive tests right now.
I know that the docs are not perfect, but they are still an important milestone, and a relevant one in the vast majority of cases. It's been important (for me at least) to know the level of completeness versus documented behavior for the common controls. This is useful today, and it's a fraction of the effort compared to writing tests. I think they are complementary.
Dimi Paun wrote:
On Mon, 2005-11-14 at 03:32 -0600, Alexandre Julliard wrote:
The only way to know that something is done is to write a test case for it and check the behavior against Windows. MSDN cannot be trusted, and marking code as DONE just because it follows what happens to be in the doc today is very misleading.
I'm not arguing against test cases -- it's obviously better to have test cases. However, even if we do have them, there's no way of knowing that we are testing all documented aspects. In fact, I doubt we have such exhaustive tests right now.
I know that the docs are not perfect, but they are still an important milestone, and a relevant one in the vast majority of cases. It's been important (for me at least) to know the level of completeness versus documented behavior for the common controls. This is useful today, and it's a fraction of the effort compared to writing tests. I think they are complementary.
And how do you deal with MSDN docs evolving? I'm not even speaking of evolution because of new features, but MSDN gets updated from questions asked on the KB. A+
From: "Eric Pouech" eric.pouech@wanadoo.fr
And how do you deal with MSDN docs evolving? I'm not even speaking of evolution because of new features, but MSDN gets updated from questions asked on the KB.
It's true. For this reason I've clearly marked on the common controls the date and exact version of the DLL against which I made the audit.
I don't know how to solve the problem 100%. But truth is, for a lot of the stuff out there the docs are not that volatile. We could note the version of the DLL that we audited against, for example: STATUS: COMPLETE as of 2.3.4, or somesuch (maybe something less verbose?).
On Sat, 2005-11-12 at 01:44 +0100, Markus Amsler wrote:
I like the states from Michael, and the short @-notation from ReactOS. The implementation status is just a flag; writing a whole section is IMO overkill. Any agreement on this proposal:
@-notation, 5 states:
I think this notation is not consistent with the one we're already using. How about:
STATUS: STUBBED|SKETCHY|SUBSTANTIAL|COMPLETE|REVIEWED
Also, it is not clear that the REVIEWED status belongs in the enumeration. It seems orthogonal to the completeness issue. For example, the listview control is audited (reviewed), but is not complete.