How to calculate approximation error of trigonometric functions


I recently reread my undergrad calculus book, and the section on Taylor series mentions that Taylor polynomials are used in programming languages to approximate trigonometric functions. I have a few questions:

  • Are Taylor polynomials actually used, or is there a better method? (My skepticism comes from the number of divisions in these polynomials, although I guess you could precompute most of them.)

  • If so, what degree is the polynomial?

  • How do you measure the approximation error compared to the real function? I can imagine comparing an approximation against a more precise one of higher degree, but that would still be a comparison between two approximations.
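On the last point: a common practical approach is to evaluate both the approximation and a much more accurate reference over many sample points and take the maximum absolute difference. The reference (here Python's `math.sin`, itself an approximation, but typically accurate to within about one ulp) only needs to be far more precise than the thing being tested. A sketch:

```python
import math

def sin_taylor(x, terms=8):
    """Taylor series of sin about 0: sum of (-1)^k x^(2k+1) / (2k+1)!.

    Each term is derived from the previous one, so no factorials or
    divisions need to be recomputed from scratch.
    """
    total = 0.0
    term = x
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total

# Maximum absolute error of a degree-7 Taylor polynomial (terms=4)
# over [-pi/2, pi/2], using math.sin as the trusted reference.
xs = [-math.pi / 2 + i * math.pi / 1000 for i in range(1001)]
max_err = max(abs(sin_taylor(x, terms=4) - math.sin(x)) for x in xs)
print(f"max abs error, degree-7 Taylor on [-pi/2, pi/2]: {max_err:.3e}")
```

The Lagrange remainder bound gives the same answer analytically: for a degree-7 polynomial the error on [-pi/2, pi/2] is at most (pi/2)^9 / 9!, roughly 1.6e-4, which matches what the sampling finds.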


My understanding is that:

On embedded systems, CORDIC algorithms are used; entire courses exist on this subject.
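CORDIC is attractive in hardware because it needs only shifts, adds, and a small table of precomputed angles, with no multiplier. A floating-point sketch of the rotation mode (a real implementation would use fixed-point integers and replace the `2.0 ** -i` multiplications with bit shifts):

```python
import math

# Precomputed rotation angles arctan(2^-i) and the accumulated CORDIC
# gain K, which is folded into the starting vector.
N = 32
ANGLES = [math.atan(2.0 ** -i) for i in range(N)]
K = 1.0
for i in range(N):
    K /= math.sqrt(1.0 + 2.0 ** (-2 * i))

def cordic_sin_cos(theta):
    """Rotation-mode CORDIC for theta in [-pi/2, pi/2].

    Rotates the vector (K, 0) toward angle theta by a fixed sequence of
    ever-smaller micro-rotations, choosing the direction of each from
    the sign of the remaining angle z.
    """
    x, y, z = K, 0.0, theta
    for i in range(N):
        d = 1.0 if z >= 0 else -1.0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * ANGLES[i]
    return y, x  # (sin(theta), cos(theta))
```

With 32 iterations the residual angle is below arctan(2^-31), so the result is accurate to roughly 9–10 decimal digits; arguments outside [-pi/2, pi/2] would first be range-reduced using symmetry.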

At larger scales, software libraries typically store results at 1/2-degree increments in a table, and anything in between is interpolated between those increments.
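The half-degree table idea can be sketched like this (the spacing, range, and linear interpolation are illustrative assumptions, not any particular library's layout):

```python
import math

STEP = math.radians(0.5)  # 1/2-degree table spacing
# Precomputed sine table covering 0..90 degrees, plus one guard entry
# so interpolation at the top of the range stays in bounds.
TABLE = [math.sin(i * STEP) for i in range(182)]

def sin_lut(x):
    """sin(x) for x in [0, pi/2] via table lookup + linear interpolation."""
    i = int(x / STEP)
    frac = x / STEP - i          # fractional position between table entries
    return TABLE[i] + frac * (TABLE[i + 1] - TABLE[i])
```

For linear interpolation with step h, the worst-case error is about h²/8 · max|sin''|, which for half-degree steps comes to roughly 1e-5 — fine for graphics, but far from full double precision, which is why library `sin` implementations use polynomial (minimax, not plain Taylor) approximations after range reduction instead.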
