On 05/16/2017 05:50 PM, Matteo Bruni wrote:
2017-05-15 22:24 GMT+02:00 Paul Gofman gofmanp@gmail.com:
On 05/15/2017 09:19 PM, Matteo Bruni wrote:
2017-05-12 14:24 GMT+02:00 Paul Gofman gofmanp@gmail.com:
@@ -163,6 +163,7 @@ struct d3dx_const_tab
     unsigned int const_set_size;
     struct d3dx_const_param_eval_output *const_set;
     const enum pres_reg_tables *regset2table;
+    ULONG64 update_version;
 };
Is it necessary? Could you do without it, using the pass version instead?
I think I can, but the reasons I did this were:
- The logic in patch 5 (removing redundant constant table updates &
preshader recomputes) won't work without it. Imagine an array selector with shader 1 selected; after BeginPass() or Commit() the pass version is updated. If another shader is selected later, it might not get updated for some of the parameter changes, so I would ultimately have to recompute all preshaders and reset constant table parameters on update_all, which I can avoid using this const_tab's own version. Even if I manage to work around this array selector case somehow, redundant shader constant updates and preshader recomputes can still happen if different passes or techniques use the same shader.
That seems like a good reason. Do you think it's possible to write a test for it (i.e. does this kind of behavior have some visible effect in black-box testing?)
I don't see how the choice between the options here can be exposed through black-box behaviour, unless I figure out some way of "probing" it via race conditions or something like that, which looks a bit insane to me and would require some guessing at native d3dx structure layouts. It's all about saving preshader executions and table updates in some cases (which should produce the same results anyway) and not passing 'update_version' through to all the preshader functions.
Do you mean a pointer named update_version in struct d3dx_parameter which would point to either the shared data or the update version in the same parameter? I would like to avoid this if possible, as it adds an extra indirection for non-shared parameters. I don't have exact performance numbers for such a change, but it looks like removing some indirections here and there (after the more global optimizations of the set_constants function) gives a moderate improvement.
It would look a bit nicer though. Anyway, I don't really mind.
I then suggest we maybe rethink it once again at a later stage. I already have a patch which uses different structures for top-level parameters and members, saving a few fields for members; I would feel less regret adding an extra pointer field to just the top-level parameter :) I am not sure yet whether that patch is useful at all, though, as I could not reliably measure any performance improvement from it. Anyway, I am likely to experiment more with finer performance optimizations after the bigger ones, and at that point it may become beneficial to change a few things around this anyway.
Not incrementing the version on update likely won't break anything, provided the initialization is changed a bit (incrementing the version on effect creation) and the comparison conditions are watched carefully for "off by one" errors. At least right now I cannot figure out what could go wrong, though I need to test it more and recheck the code paths thoroughly to be sure it is ok. It looked natural to me to increase the version on parameter update. Don't you think that verifying the overall logic becomes a bit mind-bending if we don't increment? Incrementing the version every time makes the 'updated' check straightforward in every case, regardless of how the parameter update sequence interleaves with the update-'consuming' sequence. Otherwise we need to check all the possible use cases of "dirty" parameters to be sure.
AFAICS it shouldn't make any difference in behavior, unless there is some bug lurking (and in that case exposing the bug would be a good thing IMO).
Or, if you suspect it may have some indirect performance impact (which I
am keen to test), maybe I could test it as a separate change after some other (more severe) optimization, when I can potentially see the effect more clearly?
Yeah, I don't see it having any measurable impact but it's still unnecessary (if trivial) work.
I don't mind looking into it later, I mentioned it mostly to double-check that we're on the same page.
I totally agree, I don't see how it can break anything either; moreover, I already briefly tested such a change before replying yesterday (there are a couple more small things to do besides just not incrementing the counter to make it work, but it is still trivial). Having a unique version for each parameter update and each "commit" feels a bit less fragile and easier to work with to me: equality between the pass update_version and a parameter's update version becomes impossible, and the parameter update sequence counting goes on completely independently of how those updates are consumed by the "readers". So unless you are uncomfortable with this increment I would keep it; otherwise I can change it.