On 4/2/20 19:42, Connor McAdams wrote:
I will see if I can get an Aberth method function written without too much difficulty and test it out, maybe run some speed tests between the two.
I still can't understand why you would pursue a more complicated and general method which is not required for this problem, does extra work by finding all the complex roots, and has special requirements for the initial value estimation. The analytical solution is probably good enough; even using doubles for it is probably overkill. If for any reason you don't like it, one root can be found by bisection. That is especially handy given you are interested in roots in the [0.0, 1.0] interval only, so you immediately know how to bound your search instead of worrying about a correct initial estimate for a generic iterative method.

Then, given the polynomial P_3(x) = a*x^3 + b*x^2 + c*x + d and a root x1 (that is, P_3(x1) = 0), factoring this root out gives:
P_2(x) = a*x^2 + (b + a*x1)*x + (c + b*x1 + a*x1^2).
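For example, a rough sketch of that approach (the helper names are just for illustration, and it assumes the cubic changes sign on the bracketing interval; real code would need to handle the other cases):

/* Evaluate P_3(x) = a*x^3 + b*x^2 + c*x + d in Horner form. */
static double eval_cubic(double a, double b, double c, double d, double x)
{
    return ((a * x + b) * x + c) * x + d;
}

/* Bisection on [lo, hi]; assumes P_3 changes sign between lo and hi. */
static double bisect_root(double a, double b, double c, double d, double lo, double hi)
{
    double flo = eval_cubic(a, b, c, d, lo);
    unsigned int i;

    for (i = 0; i < 64; ++i)
    {
        double mid = 0.5 * (lo + hi);
        double fmid = eval_cubic(a, b, c, d, mid);

        if (flo * fmid <= 0.0)
            hi = mid;
        else
        {
            lo = mid;
            flo = fmid;
        }
    }
    return 0.5 * (lo + hi);
}

/* Factor the root x1 out: P_3(x) = (x - x1) * (A*x^2 + B*x + C). */
static void deflate(double a, double b, double c, double x1,
        double *A, double *B, double *C)
{
    *A = a;
    *B = b + a * x1;
    *C = c + b * x1 + a * x1 * x1;
}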
What do you expect to go wrong with the accuracy of these quadratic coefficients, especially given you are interested in a root in the [0, 1] interval only? If you expected some special polynomial coefficients there, that would mean you would have trouble just evaluating such a polynomial, even without searching for any roots, and would have to reformulate the problem in some way; but I doubt this is a real concern.
I suppose it would be more practical to write some tests for this, from the higher level (testing how the intersection works, including degenerate and corner cases like vertical lines, horizontal lines, etc.) down to the inner root-finding function (which is very easy to test by substituting the found roots into the initial polynomial).
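For the root finder, such a check could be as simple as the following (the tolerance here is just a placeholder):

#include <math.h>

/* Substitute a found root back into the original polynomial and
 * check that the residual is small. */
static int root_is_good(double a, double b, double c, double d, double x)
{
    double residual = ((a * x + b) * x + c) * x + d;
    return fabs(residual) < 1e-9;
}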
Also things like this:
+    theta = -atan2f(q[1]->y - q[0]->y, q[1]->x - q[0]->x);
+    /* Rotate the Y coordinates of the cubic bezier. */
+    y[0] = (p[0]->x - q[0]->x) * sinf(theta) + (p[0]->y - q[0]->y) * cosf(theta);
+    y[1] = (p[1]->x - q[0]->x) * sinf(theta) + (p[1]->y - q[0]->y) * cosf(theta);
+    y[2] = (p[2]->x - q[0]->x) * sinf(theta) + (p[2]->y - q[0]->y) * cosf(theta);
+    y[3] = (p[3]->x - q[0]->x) * sinf(theta) + (p[3]->y - q[0]->y) * cosf(theta);
Apart from computing sin and cos four times each, you can avoid using any transcendental function at all by normalizing the line direction vector (q[1]->x - q[0]->x, q[1]->y - q[0]->y) and using its components in place of cos and sin.
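For example (the struct point type and function name here are just for illustration; it assumes q[0] and q[1] are distinct points):

#include <math.h>

struct point { float x, y; };

/* Same computation as the quoted diff, but with sin/cos taken from the
 * normalized line direction instead of atan2f/sinf/cosf. */
static void rotate_y_coords(const struct point *p[4], const struct point *q[2], float y[4])
{
    float dx = q[1]->x - q[0]->x;
    float dy = q[1]->y - q[0]->y;
    float len = sqrtf(dx * dx + dy * dy);
    float sin_theta = -dy / len; /* == sinf(-atan2f(dy, dx)) */
    float cos_theta = dx / len;  /* == cosf(-atan2f(dy, dx)) */
    unsigned int i;

    for (i = 0; i < 4; ++i)
        y[i] = (p[i]->x - q[0]->x) * sin_theta + (p[i]->y - q[0]->y) * cos_theta;
}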
Also, I guess I should switch over to doubles then as well. I'll see if that makes any difference.
On Wed, Apr 1, 2020 at 1:14 PM Paul Gofman <gofmanp@gmail.com> wrote:
On 4/1/20 20:03, Giovanni Mascellani wrote:
On 01/04/20 18:46, Paul Gofman wrote:
Given that the complex roots are not needed here and the polynomial is always cubic, is this generic method really beneficial? It would probably be simpler and quicker to find one root x1 with simple bisection, then divide the polynomial by (x - x1) and deal with the remaining quadratic equation.
This kind of division is typically numerically unstable. It might be that for cubic polynomials the problem is not very apparent,
Yes, factoring roots out of a high-degree polynomial can accumulate error, but how is that a problem for just one root?
Also, I think just using double precision in the analytical solution will avoid any practical stability problems in this case.
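For reference, a rough sketch of such a closed-form solution in doubles (degenerate inputs like a == 0 or a triple root are not handled here; only the standard Cardano / trigonometric forms are shown):

#include <math.h>

/* Solve a*x^3 + b*x^2 + c*x + d = 0; returns the number of real roots
 * written to roots[]. */
static int solve_cubic(double a, double b, double c, double d, double roots[3])
{
    double B = b / a, C = c / a, D = d / a;
    double p = C - B * B / 3.0;                      /* depressed cubic: */
    double q = 2.0 * B * B * B / 27.0 - B * C / 3.0 + D; /* t^3 + p*t + q */
    double shift = -B / 3.0;                         /* x = t + shift */
    double disc = q * q / 4.0 + p * p * p / 27.0;

    if (disc > 0.0)
    {
        /* One real root (Cardano's formula). */
        double s = sqrt(disc);
        roots[0] = cbrt(-q / 2.0 + s) + cbrt(-q / 2.0 - s) + shift;
        return 1;
    }
    else
    {
        /* Three real roots, via the trigonometric form (p < 0 here). */
        double m = 2.0 * sqrt(-p / 3.0);
        double arg = 3.0 * q / (p * m);
        double phi;
        int k;

        /* Clamp against rounding before acos(). */
        if (arg < -1.0) arg = -1.0;
        else if (arg > 1.0) arg = 1.0;
        phi = acos(arg) / 3.0;

        for (k = 0; k < 3; ++k)
            roots[k] = m * cos(phi - 2.0 * M_PI * k / 3.0) + shift;
        return 3;
    }
}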