# Complex Exponents

## Recommended Posts

How does one go about calculating complex exponents? To be more specific, what is the value of a^i, a^(bi), and a^(c + bi)? While reading a book about prime numbers, I came across a section about Riemann's Hypothesis. From what I understand, Riemann's Hypothesis has to do with the graph of the zeta function when it is fed a complex number. The zeta function uses its argument as an exponent.

##### Share on other sites

1) Euler's formula says that e^(ix) = cos x + i sin x (you can read about this here, for example: http://en.wikipedia.org/wiki/Euler's_formula ). If a is a real number, then by properties of logarithms a = e^(ln a). Therefore, a^i = (e^(ln a))^i. Since the normal rules of powers still apply, this equals e^(i ln a), and by applying Euler's formula this is cos (ln a) + i sin (ln a).

2) a^(bi) = (e^(ln a))^(bi) = e^(i*b(ln a)). Therefore applying Euler's formula again, we get cos (b(ln a)) + i sin (b(ln a)).

3) a^(c+bi) = a^c times a^(bi) by laws of exponents, so multiplying a^c through number 2, we get a^c(cos (b(ln a)) + i sin (b(ln a))).
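The three formulas can be sanity-checked numerically. Here is a minimal Python sketch (the values a = 2, b = 3, c = 0.5 are arbitrary illustrative choices, not from the post):

```python
import math

# Arbitrary example values (hypothetical, chosen for illustration only)
a, b, c = 2.0, 3.0, 0.5

# Direct computation: a^(c+bi) via Python's built-in complex exponentiation
direct = a ** complex(c, b)

# Formula from step 3: a^c * (cos(b ln a) + i sin(b ln a))
formula = (a ** c) * complex(math.cos(b * math.log(a)),
                             math.sin(b * math.log(a)))

print(direct)   # the two results should agree to machine precision
print(formula)
```

Running this, both expressions print the same complex number, which is a quick check that the derivation holds for real a.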

Now, this assumes that a is a real number. It gets a bit more complicated if it's not. If you want more information or if I was unclear, let me know.

(throughout, I have used ^ as notation for (to the power of))

##### Share on other sites

Thank you. I took a look at the proof on Wikipedia and it made good sense to me. Now I just have a few more questions. First of all, I want to be sure in my thinking that 2^(2i) is an irrational number in both its real and imaginary parts. I will show the steps I took to calculate it. If anything here is wrong, please tell me.

2^(2i) = cos(2(ln 2)) + i sin(2(ln 2))

2^(2i) = cos(2(0.693)) + i sin(2(0.693))

2^(2i) = cos(1.386) + i sin(1.386)

2^(2i) ≈ 0.183 + 0.983i
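The hand calculation above can be checked directly in Python, whose complex type accepts complex exponents (2j is Python's notation for 2i):

```python
# Python evaluates complex powers directly
z = 2 ** 2j
print(z)  # roughly 0.183 + 0.983i, matching the hand calculation
```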

It just seems strange that an "integer" imaginary number used as an exponent of another integer would be irrational.

Also, where did mathematicians come up with those formulas for sines and cosines? I always wondered about that. I used to think that they had to draw triangles with a protractor and then measure sides to come up with those until I realised that a calculator would not have enough memory to store tables of those functions.

##### Share on other sites

> Also, where did mathematicians come up with those formulas for sines and cosines? I always wondered about that. I used to think that they had to draw triangles with a protractor and then measure sides to come up with those until I realised that a calculator would not have enough memory to store tables of those functions.

A calculator will find trigonometric values using the power series expansion. A Maclaurin series (special case of the Taylor series) is an infinite series which approximates a function. You can get any desired degree of accuracy by taking as many terms as desired.

For example, sin x = x - x^3/3! + x^5/5! - x^7/7! + ...

This series converges extremely rapidly, and the error is easy to estimate because the signs alternate. The error is less than the next missing term -- for example, if I cut off the series after 4 terms, the error will be less than x^9/9!.
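For illustration (not part of the original reply), the partial sums and the alternating-series error bound can be checked numerically; `sin_series` is a hypothetical helper name:

```python
import math

def sin_series(x, n_terms):
    """Partial sum of the Maclaurin series for sin x."""
    total = 0.0
    for k in range(n_terms):
        total += (-1) ** k * x ** (2 * k + 1) / math.factorial(2 * k + 1)
    return total

x = 1.0
approx = sin_series(x, 4)                  # terms up to x^7/7!
error_bound = x ** 9 / math.factorial(9)   # the next omitted term
print(abs(approx - math.sin(x)) < error_bound)  # the error respects the bound
```

With four terms the approximation to sin 1 is already correct to about six decimal places, and the error is indeed smaller than x^9/9!.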

> It just seems strange that an "integer" imaginary number used as an exponent of another integer would be irrational.

Your calculations are fine. Most numbers raised to the integral multiples of i will not be "nice" any more than cos 1 is nice. (Btw, for a nifty trick, you can check your calculations by googling 2^(2i). Google calculator will automatically find it for you.)

##### Share on other sites

Could you show me a proof for the validity of the series? I always feel guilty if I use a formula and don't know how or why it works.

##### Share on other sites

> Could you show me a proof for the validity of the series? I always feel guilty if I use a formula and don't know how or why it works.

You simply do a Taylor series expansion of the sine function about zero. Just calculate the terms.

All the even terms go away because the even derivatives would reproduce the sine, with sin(0)=0 - the underlying reason, of course, being that the sine is an odd function.

##### Share on other sites

> You simply do a Taylor series expansion of the sine function about zero. Just calculate the terms.
>
> All the even terms go away because the even derivatives would reproduce the sine, with sin(0)=0 - the underlying reason, of course, being that the sine is an odd function.

This is correct, so I will not bother restating it :D. I will add that this is taught in calculus 2, and if you have not had calculus it probably won't make any sense.

However, if you have had calculus but it's been a while, here's a blog which includes two step-by-step derivations:

http://blogs.ubc.ca/...ansion-of-sinx/

(note that the next-previous links at the bottom lead to the expansions for e^x and cos x)

##### Share on other sites

I am a ninth-grader who has never taken calculus, so I often have a hard time understanding complicated proofs. I like math very much though, so if I wind up learning calculus from this thread, I won't be surprised. So I will ask a few questions here and maybe the proof will become clear to me. Is a derivative the result of a function? It looks like it is to me, because, as the e^x page on that site says, when f(x) = e^x and x = 0, f(x) is equal to 1 (because anything to the power of 0 is 1). So if x = 1, would the derivative be e? Or e^2 when x = 2? And back at the first page, is the d in the formula in step 1 the derivative?

##### Share on other sites

Not to discourage you, but I don't think this is going to work in a thread. If you are interested in learning calculus, get yourself a textbook - a used older edition of Stewart's Single Variable Calculus will cost you a few dollars online - and start reading. It will really make sense; conceptually, calculus is not really difficult.

I, and probably some others on this board, will be happy to answer specific questions that arise.

All the best.

##### Share on other sites

I didn't mean that I was hoping to learn all of calculus, but I just hoped that someone might answer some questions for me. My mother, who is a poster here, told me that I might enjoy asking about these things on the self-education forum. If there is no way to demonstrate these formulas without using calculus, I'll come back to this once I've studied some basic concepts in calculus. I have an interest in math, and enjoy reading about and trying to figure out concepts ahead of my formal studies.

##### Share on other sites

Algorithm, since you seem to really enjoy thinking about mathematics, you might enjoy taking a look at What is Mathematics? by Courant, Robbins, Stewart. It's a survey of lots of different mathematical fields, including prime numbers, complex numbers, the Zeta function, calculus and much more. You wouldn't sit down and read this from cover to cover, but it might be handy to have on hand for those questions that seem to come up.

##### Share on other sites

OK, let me try to give you the two basic ideas of calc, so you have a starting point.

The derivative is a concept that allows us to find the slope of a function at any given point. If you draw a straight line through any two points, you know that the slope is rise over run, or (f(x2) - f(x1))/(x2 - x1). The main idea is to let the points slide closer and closer together, bringing x2 and x1 together and making the run essentially zero. In this limit, the slope of that line becomes the slope of a tangent to the curve. This is called the derivative, and part of calculus explores how to calculate derivatives of functions. There are procedures to do that without having to go through the actual line/slope process.
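The sliding-points idea can be sketched numerically. In this hypothetical example, f(x) = x^2 at x0 = 1 is an arbitrary choice (its derivative there is 2), and we watch the secant slope approach that value as the run shrinks:

```python
def f(x):
    return x ** 2

x0 = 1.0
for run in [0.1, 0.01, 0.001, 0.0001]:
    x2 = x0 + run
    # rise over run between the two nearby points on the curve
    slope = (f(x2) - f(x0)) / (x2 - x0)
    print(run, slope)  # slopes approach 2 as the run shrinks
```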

Now, the Taylor series is a kind of approximation. Imagine you have a function and you want to know approximately how big its value is close to some point x_0 at which you know the function's value. You know that if your x is close to x_0, the function's value has to be close to f(x_0). Of course, that is not entirely correct unless the function is constant. So you pretend the curve is straight (which is a very good approximation if you are close) and correct for the deviation; that correction involves the derivative. Now, since the curve is not exactly straight (unless it is linear), you have to correct for curvature, and you get a second term which is related to the derivative of the derivative. If the curve is not a quadratic function, this, too, is not exact, so you keep correcting with higher and higher terms involving higher derivatives. The sum of these terms is called a Taylor series and can be calculated for any function which is well behaved (it can't have holes or kinks or jumps).

That's where those formulas come from.

The other main idea of calculus is the integral, which has to do with finding the area under a curve. There is a surprising relationship between this and the derivative. Doing integrals is much more complicated, since there is no general rule for how to find them for all functions. That's why a calculus text will have only one or two chapters on derivatives, but many more on integration.

##### Share on other sites

Algorithm, how much math have you had? What are you formally studying now? This would help with recommendations for further reading.

##### Share on other sites

Actually, I've read parts of that What Is Mathematics? book before. I never got to the part about calculus, because they were so in-depth that I thought it would be too hard to understand. I found it very useful for stuff like whole-number solutions to the Pythagorean Theorem and prime numbers.

I didn't know that mathematicians had ways of correcting the curved functions. Are more steps required the higher the exponent is? For example, would a quintic function require more steps for adjustment than a quartic function, and so forth?

I'm doing Geometry this year (grade 9) and I've already taken Algebra I. I think that Algebra II concepts would probably come easily for me though; I've read about things like logarithms, factorials, and the trig functions for interest and understand how to use them.

##### Share on other sites

> Actually, I've read parts of that What Is Mathematics? book before. I never got to the part about calculus, because they were so in-depth that I thought it would be too hard to understand. I found it very useful for stuff like whole-number solutions to the Pythagorean Theorem and prime numbers.

You might want to take another look at it sometime soon, especially after you've mastered algebra and geometry. Another book that helped my daughter get an intuitive feel for calculus was Calculus for the Forgetful by Wojciech Kosek. It's a slim little book that gets to the heart of the subject without a lot of details. (it does take a mature reading level) After that she filled in the details of calculus with a more standard book.

> I didn't know that mathematicians had ways of correcting the curved functions. Are more steps required the higher the exponent is? For example, would a quintic function require more steps for adjustment than a quartic function, and so forth?

Regentrude was talking above about approximating functions f(x), which can be a trig function (or exponential or rational or any kind of function) near a point x0 by using a series of polynomial functions. We like to do that in math because polynomials (a + bx + cx^2 + dx^3+ ...) are well-behaved and usually easier to work with.

It turns out that you can use calculus to find the values of the coefficients a, b, c, d, ... in that expansion. We can start by finding just the a+bx part of the series, which is a straight line approximation to f(x) [remembering how straight lines are written as y = bx +a].

Each new term in the series corrects the previous approximation a little bit more, introducing more curvature by adding a higher power of x. The further out you go in the series, the better the approximation. If you go infinitely far, and if the f(x) function is nice & smooth enough, the series sums exactly to f(x), at least in an interval around your point x0.

Now, the formulas that allow you to calculate a, b, c, etc, for any given function f(x) and point x0 involve the derivatives of f(x) (which are functions themselves!) evaluated at x0. As you go further out, you need higher order derivatives.

So, let f(x) = sin x. Suppose we want to approximate sin x near x=0:

We can use calculus to figure out that, near x=0, sin x is approximately equal to the straight line approximation

sin 0 + f '(0)*x

= 0 + cos(0) *x

= 0 + 1*x

= x

(here f '(x) is the first derivative function for sin x, which actually equals cos x)

So sin x is approximately equal to x near 0. Try it on your calculator. For small radian values, sin x and x are nearly equal.
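The "try it on your calculator" suggestion, sketched in Python (the sample values are chosen arbitrarily):

```python
import math

# Compare sin x with x itself for small radian values
for x in [0.5, 0.1, 0.01]:
    print(x, math.sin(x))  # the two columns agree more closely as x shrinks
```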

We can go on and use more terms in our approximation of sin x. Each term involves calculating a higher derivative of sin x. Using more and more terms gains us more accuracy. If we go on forever, we get an exact expression for sin x made up of infinitely many polynomial terms, and it involves calculating all orders of derivatives of sin x. That creature is what we call a Taylor (or Maclaurin) series for sin x. It is:

sin x = x - x^3/3! + x^5/5! - x^7/7! + ...

This one happens to be valid for all real numbers x. That doesn't always happen; usually it's only valid in an interval around x0. Your calculator uses this series expansion to find values of sin x when you press the sin button.

Does that help at all? If not, feel free to keep asking questions. :001_smile:

##### Share on other sites

> I didn't know that mathematicians had ways of correcting the curved functions. Are more steps required the higher the exponent is? For example, would a quintic function require more steps for adjustment than a quartic function, and so forth?

If you mean for the Taylor series approximation? Yes, that would be so. But the REALLY cool thing is that the technique is not restricted to polynomials themselves - you can approximate functions like sine and cosine and exponential functions too! The more terms you include, the more accurate the approximation, but if you are close to your known reference value, the higher terms become smaller and smaller because they go in powers of the distance from that position. Really neat stuff.

##### Share on other sites

Thank you for this insight. You're making me even more interested in calculus than I was before! I will certainly read the recommended books if I can get them from our library system. Is there a uniform system for finding how to calculate these derivative functions?

##### Share on other sites

Hi Algorithm!

Let's see. The derivative of f(x) is also a function, usually called f '(x).

The value of the derivative f '(x0) is defined to be the slope of the graph of y = f(x) at x0. What is the slope of a curved graph at a given point on the graph? It's defined to be the slope of the tangent line to the graph at that point. [A tangent line is a straight line that just touches the graph at that point, matching the graph's direction there.]

You can find a formula for f '(x) by realizing that for values of x very near x0, the ratio

[f(x) - f(x0)] / [ x-x0]

represents the slope [rise over run] of the secant line to the graph of y = f(x) that goes through the points

(x0, f(x0)) and (x, f(x)). [A secant line is a just a straight line going through two points on the graph]

Now suppose that x approaches x0. At the same time, that secant line approaches the tangent line to the graph of y = f(x) at the point (x0, f(x0)).

So as x --> x0, (read this "as x approaches x0"), the slope ratio above approaches the slope of the tangent line to y = f(x) at x = x0. But, hey! that's the same thing as the derivative of f(x) at x0 . So:

f '(x0) = lim (as x→x0) of [f(x) - f(x0)] / [x - x0]

That's the basic formula for finding a derivative function. It takes a bit of practice to be able to calculate the limit above for a given function f(x). Don't worry, you'll get plenty of practice in calculus class someday!

And fortunately, you'll find a lot of patterns. For instance, if you apply this method to powers of x, f(x) = x^n (n not equal to 0), you can show that the derivative function is always f '(x) = n x^(n-1). Nice, huh?
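The power rule can be spot-checked numerically with a central-difference approximation; `derivative` is a hypothetical helper, and n = 5, x0 = 2 are arbitrary example values:

```python
def derivative(f, x0, h=1e-6):
    # central-difference approximation to f'(x0)
    return (f(x0 + h) - f(x0 - h)) / (2 * h)

n = 5
f = lambda x: x ** n
x0 = 2.0

numeric = derivative(f, x0)
exact = n * x0 ** (n - 1)   # the power rule: f'(x) = n x^(n-1)
print(numeric, exact)       # both near 80
```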

##### Share on other sites

And so the derivative is written as function-name'(x)? Making the derivative of f(x) = f'(x) and so on?

##### Share on other sites

> And so the derivative is written as function-name'(x)? Making the derivative of f(x) = f'(x) and so on?

Yes. That's one notation. It can also be written dy/dx.

##### Share on other sites

Ah, I was going to post, but forgot. Algorithm, if you're looking for more fun and interesting reading in this general area, you might try "e: the story of a number", by Eli Maor. I was given it for Christmas this year and enjoyed it. Parts of it require knowledge that you won't have yet (especially calculus), but since you clearly aren't thrown off by that, you may enjoy it.
