Decay and growth rates

NOTE: This page uses MathJax, a JavaScript program running in your web browser, to render equations. It renders the equations in three steps: 1. during loading you may see the “raw code” that generates the equations, 2. an HTML version of the equations is pre-rendered (functional, but not nice looking), then 3. a full and proper “typeset” rendering of the equations. Depending on the speed of your system and browser, it may take a few seconds to tens of seconds to finish the whole process. The end product is usually worth the wait, imho.

This is the next instalment in Phelonius’ Pedantic Solutions, which aims to tackle something I wish I had when I was doing my physics degree: some step-by-step solutions to problems that explain how each piece is done in a painfully clear way (sometimes I was pretty thick and missed important points that are maybe obvious to me now, looking back). I have realized I could have benefited from a more integrated approach to examples, where whatever tools are needed are explained as the solution is presented, so that is at the core of these attempts. I have also been told all my life that the best way to learn something is to try to teach it to someone else, and so this is part of my attempt to become more agile and comfortable with the ideas and tools needed to do modern physics. If you don’t understand something, please ask, because that means I haven’t explained it well enough and others will surely be confused as well.

Things covered:

  • the exponential growth and decay equation
  • how to figure out the differential equation for a simple process
  • differentiating and integrating \(e^x\) and its inverse function \(ln(x)\)
  • single-variable indefinite integration and integration by substitution
  • solving a simple differential equation (exponential solution)

I was working with the decay rates of fundamental particles, and realized there were a number of techniques I could share that confused me when I first encountered them. The process I was looking at is governed by the rather simple-looking formula \(dN(t) = -\Gamma N(t) dt\), which reads: the infinitesimal change of N (a function of time) is equal to minus (capital) Gamma times N (a function of time) times the infinitesimal of time. This is a differential equation, and its solution is \(N(t) = N(0)\cdot e^{-\Gamma t}\), which reads: N (a function of time) is equal to N (at time 0, i.e. the initial count or measurement after which we measure changes in the system) times \(e\) (a very special, but simple, number \(\approx 2.71828\)) raised to the power of minus (capital) Gamma multiplied by time. Many natural phenomena can be described by this decay and growth equation. In physics, the easiest example that comes to mind is particle decay, where particles (which could be in the nuclei of atoms, in which case it is a form of radioactivity) spontaneously decay into other particles and various forms of energy. The same equation can be used to describe reagent amounts in chemical reactions, aspects of biological systems or populations, and a host of other phenomena. It’s a good concept to understand fully and be able to work with. I will show how to solve the differential equation to get the solution above; but first, I will dig down into how to derive the differential equation in the first place, and what each piece means.
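
To make the equation concrete before digging in, here is a minimal sketch in Python (the 20-minute half-life and the variable names are my own assumptions for illustration; the decay constant follows from the half-life via \(\Gamma = ln(2)/t_{half}\)):

    import math

    N0 = 1000.0                       # initial count, N(0)
    t_half = 20.0                     # assumed half-life, in minutes
    Gamma = math.log(2) / t_half      # decay constant: half remain after t_half

    def N(t):
        # N(t) = N(0) * e^(-Gamma * t), the solution quoted above
        return N0 * math.exp(-Gamma * t)

    for t in (0, 20, 40, 60):
        print(t, round(N(t), 1))      # 1000.0, 500.0, 250.0, 125.0: halved every 20 minutes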

“A differential equation is a mathematical equation that relates some function with its derivatives. In applications, the functions usually represent physical quantities, the derivatives represent their rates of change, and the equation defines a relationship between the two.” [1]. In this case, the physical quantity is the amount of something, \(N\), which changes over time (thus, \(N\) is a function of time \(t\), or \(N(t)\) for short). The quantity \(N\) can be the number of atoms in a block of radioactive metal, the amount of a reagent left in a chemical reaction, or the number of bacteria in a culture, to name but a few instances of where this is useful. \(\Gamma\) is just a number that determines how fast or slow the quantity of whatever is being measured will change over time (e.g. 0.9, 2.6, etc.) — in other words, it defines the relationship between the quantity and its derivative (and recall that a derivative simply measures the rate of change of something). The fact there is a negative sign in front of the \(\Gamma\) indicates that \(N\) will decrease over time (presuming that \(\Gamma\) is a positive number), thus this equation determines the “decay rate”. If \(\Gamma\) is a negative number in this specific equation, then the relationship is positive, and it determines the “growth rate” (conversely, you could drop the negative sign from the equation and use positive values for \(\Gamma\) and get the same growth rate differential equation).

Check out my previous post Relativistic energy expansion for a brief discussion of the nuts and bolts of how to take the derivative of a function; the way it is being used here is the same under the hood, but approaches things from a different conceptual direction. In thinking about something like radioactive decay, if you could instantly count all the undecayed nuclei in a sample at some point in time (call it \(t_{initial}\)), you would get a count, which we can call \(N_{initial}\). After some period of time (say \(t_{elapsed}\)), if you instantly counted all the undecayed nuclei again (at a time we call \(t_{final}\)), some may have disintegrated by radioactive decay and you would have fewer undecayed nuclei, let’s call it \(N_{final}\). If any nuclei have decayed at all during the time period \(t_{elapsed}\), then \(N_{initial} > N_{final}\) and \(N_{initial} - N_{final}\) nuclei will have disintegrated. As an example, if there are 1000 radioactive atoms and half of them will, on average, decay over 20 minutes, then after 20 minutes if we count them, there should be around 500 undecayed ones left.

As an aside, because radioactive decay is, as far as we have been able to tell, a truly random quantum process, what this really means is that if you take many samples of 1000 radioactive atoms and wait 20 minutes for each of them (in parallel or in sequence, it doesn’t matter), and then average the number of undecayed atoms left in each of the separate experiments, then that average should get closer and closer to 500 undecayed atoms the more times the experiment is run; this is a statistical result called the Law of Large Numbers [2]. The actual number of nuclei that decay in any given run of the described experiment will randomly be between 0 and 1000, with values around 500 being more likely than the extremes of the range. Using quantum mechanics, one can calculate with certainty the probability of any given result, but there is no way whatsoever to predict which result will actually occur when the experiment is run. As I like to say, there is a finite and calculable probability that all the molecules in a pot of water about to boil will hit each other in such a way that it will instantly freeze into a block of ice, but that probability is so mind-bogglingly low that it will never happen while our universe exists. In this case, there are so many atoms all bouncing around randomly in the about-to-boil water that if we do this experiment over and over again, we will see that it starts to boil at about the same time, every time (presuming all the conditions are exactly the same, which is something we usually can’t control that precisely, so we will always see some variation).
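
A quick way to see the Law of Large Numbers in action is to simulate it. Here is a minimal sketch in Python (the 50% decay probability per 20-minute window comes from the half-life example above; the run counts are arbitrary choices of mine):

    import random

    def undecayed_after_20_min(n_atoms=1000):
        # each atom independently survives the 20-minute window with probability 1/2
        return sum(1 for _ in range(n_atoms) if random.random() > 0.5)

    for n_runs in (1, 10, 100, 10_000):
        average = sum(undecayed_after_20_min() for _ in range(n_runs)) / n_runs
        print(n_runs, average)   # single runs scatter; the average homes in on 500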

When coming up with an equation to measure a changing quantity, using clunky numbers like “quantity 1000” or “20 minutes” can cause real headaches, because the rate of change varies with time in these sorts of systems. Going back to the above example, if starting with 1000 undecayed atoms, then after 20 minutes about 500 should be left, so the average rate of change is “minus 500 every 20 minutes”. Since 500 are left, if we wait another 20 minutes, then only 250 will be left, so the average rate of change is “minus 250 every 20 minutes”. Because the rate of change itself varies, a more sophisticated technique needs to be used to come up with a model of its behaviour, and that’s where infinitesimals and the instantaneous rate of change come in. We don’t want to determine the rate of change over a time interval (like 20 minutes), but rather want to know what the rate of change is at any instant in time. In the example I’ve given, when we start, the instantaneous rate of change is about –693 per 20 minutes (steeper than the average of –500, since all 1000 atoms are still present at that instant), but at \(t\) = 10 minutes, the instantaneous rate of change is about –490 per 20 minutes, and at \(t\) = 11 minutes, the instantaneous rate of change is about –473 per 20 minutes (since it depends on how many undecayed atoms are left, which is always decreasing). The instantaneous rate of change at the first 20 minute mark (when about 500 undecayed atoms should be left) is actually about –347 per 20 minutes, and not –500 or –250 or any other number that we could hope to access with simple tools!
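
Here is a small sketch in Python that reproduces those instantaneous rates (the 20-minute half-life is the running assumption; it uses the solution \(N(t) = N(0)e^{-\Gamma t}\) and the relation \(dN/dt = -\Gamma N(t)\) that this post builds up to):

    import math

    N0, t_half = 1000.0, 20.0
    Gamma = math.log(2) / t_half                   # per minute

    def rate_per_20_min(t):
        N_t = N0 * math.exp(-Gamma * t)            # undecayed count at time t
        return -Gamma * N_t * 20.0                 # dN/dt, rescaled to “per 20 minutes”

    for t in (0, 10, 11, 20):
        print(t, round(rate_per_20_min(t), 1))     # about -693, -490, -473, -347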

Determining the instantaneous rate of change is accomplished by determining how much change there is over an infinitesimally small amount of time, say between time \(t\) and time \(t + \Delta t\), as \(\Delta t \rightarrow 0\) (\(\Delta t\) is a very tiny constant that we conceptually set as close to zero as possible without being zero). We call this amount of time \(dt\). Since the amount of time is so small (infinitesimally small), the amount of change in what we are measuring will likewise be very small (infinitesimally small as well) and we call this amount \(d N(t)\). The rate of change, or slope, of the function on that microscopic time interval is \(d N(t)/dt\) (just rise over run). If we know how to calculate that tiny number \(d N(t)\) for any infinitesimal time interval (remember, the interval runs from \(t\) to \(t + \Delta t\) as \(\Delta t \rightarrow 0\)), then we know the instantaneous rate of change at every moment. The critical conceptual leap is treating \(d N(t)/dt\) as a single value on each interval, under the assumption that the function changes so little over an infinitesimal time that its rate can be treated as constant there: this is the power of the infinitesimal paradigm! If we add up all the infinitesimals \(d N(t)\) calculated for all the infinitesimals \(dt\) over a longer time interval, say 20 minutes (subdivided into an infinite number of infinitesimal time chunks of equal length, \(dt\)), then we should know the total change in \(N\) over that longer time interval.

Summing up the infinite number of infinitesimal changes over a given interval is the process of integration (the result is sometimes called the antiderivative). You can think of each \(d N(t)\) as being the area of a very small rectangle (\(dt\) thin in the time direction and \(d N(t)/dt\) high), and by adding up all the rectangle areas over all the \(dt\) slices over a longer time interval (20 minutes in this example), you get the area under the curve of the rate function \(d N(t)/dt\), which is the total change in \(N(t)\) over that interval. Integration is also the inverse operation to differentiation (taking the derivative), e.g. if \(x(t) = 2t^{3} + 8\), then \(dx(t)/dt = 6t^2\), or we can bring the \(dt\) to the other side and write \(d x(t) = 6t^2 dt\) (where we are just multiplying both sides by \(dt\) from the right to rearrange the equation). Then the integral of \(d x(t)\) is \(\int d x(t) = \int 6t^2 dt \rightarrow\) \(x(t) = 2t^3 + C\) (where, in this case, we know the initial conditions so we know \(C = 8\), therefore \(x(t) = 2t^3 + 8\)). Dealing with the constants from integration will be addressed another time; just remember that because differentiation strips off any constants, when we integrate to get the original equation back (i.e. perform the inverse operation), we have to put those constants back in (even though we may not know what the values are without additional information about the system).
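
The “adding up tiny rectangles” picture can be checked numerically. Here is a minimal sketch in Python (the slice count is an arbitrary choice of mine) that sums \((dN(t)/dt)\cdot dt\) over the first 20 minutes of the running example:

    import math

    N0, Gamma = 1000.0, math.log(2) / 20.0         # assumed 20-minute half-life

    def dN_dt(t):
        return -Gamma * N0 * math.exp(-Gamma * t)  # the instantaneous rate of change

    n_slices, T = 100_000, 20.0
    dt = T / n_slices                              # width of each thin rectangle
    total_change = sum(dN_dt(i * dt) * dt for i in range(n_slices))
    print(round(total_change, 2))                  # about -500: N falls from 1000 to ~500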

Going back to the initial differential equation and how it was derived in the first place, it just states the relationship that we already know, but in mathematical terms. Specifically, we know that as time elapses, there will be fewer and fewer of whatever is being measured. In the case of radioactive decay, we know from experiment that the relationship is a simple one: over a set period of time, some fixed percentage of the nuclei will, on average, decay, so the relationship can be represented by a fixed value, which we call \(\Gamma\). Other phenomena may have more complicated relationships that need to be represented by a function of their own rather than a constant (which could include nonlinearities or other complex features). Again, the decay or growth equation being examined here is both simple and occurs everywhere in nature and technology, so it has widespread practical use. Looking at it again from the infinitesimals viewpoint, the right side says that if we have \(N\) somethings at time \(t\) (remember, \(N\) is always changing, so we write that it is a function of time, or \(N(t)\), and here we’re looking at that quantity \(N\) at an arbitrary but specific time \(t\)), then after an infinitesimal time \(dt\), there will be \(d N\) fewer of them. How many fewer? The decrease is proportional to how many are present, and we label the proportionality constant \(\Gamma\), which we take to be a positive number (\(\Gamma \gt 0\)), and put a minus sign in front of it to indicate that \(d N\) at the given time will be a negative number (the count goes down). This is a very general idea and in this case it uses the simple number \(\Gamma\) to relate \(d N\) to \(N\) at a given time \(t\). A more explicit way of writing the infinitesimals is as follows (keeping in mind that the time interval is so small that the rate of change over that interval can be considered constant for all intents and purposes):

\[\frac{N_{final} - N_{initial}}{t_{final} - t_{initial}} = -\Gamma N_{initial}\]

The top of the fraction is the change in the count of whatever it is we’re measuring over the infinitesimal amount of time that has elapsed. We know that this rate of change will be proportional to the initial count, and we write that rate as \(-\Gamma N_{initial}\). If \(N_{final} > N_{initial}\), then it’s a growth relationship and the fraction will be positive; if \(N_{final} < N_{initial}\), then it’s a decay relationship and the fraction will be negative. By convention, \(\Gamma\) is always positive, so for a decay equation, a minus sign is placed in front of it to make the fraction negative. Remember that \(t_{final}\) is \(t_{initial}\) (which we write simply as \(t\)) plus an infinitesimally small time increment \(\Delta t\) (where \(\Delta t\rightarrow 0\)). Therefore, \(N_{initial}\) is taken at time \(t_{initial} = t\), and \(N_{final}\) is taken at time \(t_{final} = t + \Delta t\). So, this is written mathematically as:

\[\frac{N(t + \Delta t) - N(t)}{(t + \Delta t) - t} = -\Gamma N(t)\rightarrow\frac{N(t + \Delta t) - N(t)}{\Delta t} = -\Gamma N(t)\]

Recall that \(N(t + \Delta t) - N(t) = dN(t)\) and \((t + \Delta t) - t = dt\) in the limit \(\Delta t \rightarrow 0\). The value of \(\Gamma\) is chosen based on the system so it matches the observed or expected behaviour. The trick then becomes to find the function \(N(t)\). So,

\begin{equation}
\frac{dN(t)}{dt} = -\Gamma N(t)\rightarrow dN(t) = -\Gamma N(t) dt\label{diff}\tag{1}
\end{equation}

which is the differential equation relating the rate of change of \(N\) to the instantaneous quantity \(N(t)\).
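
To see that limit at work, here is a hedged numeric sketch in Python (the 20-minute half-life and the test time \(t = 5\) minutes are my own choices): the finite-difference quotient approaches \(-\Gamma N(t)\) as \(\Delta t\) shrinks.

    import math

    N0, Gamma = 1000.0, math.log(2) / 20.0       # assumed 20-minute half-life
    N = lambda t: N0 * math.exp(-Gamma * t)      # the (known) solution

    t = 5.0
    for dt in (1.0, 0.1, 0.001):
        print(dt, (N(t + dt) - N(t)) / dt)       # finite-difference slope at t
    print(-Gamma * N(t))                         # the limiting value, about -29.14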

As stated above, integration is the inverse operation of differentiation, and is required to put any of this to use. Unlike differentiation, integration is a nasty business and often cannot be done using a set of a few simple rules. In some cases, the only known way to do an integration is numerically using computers (there is no analytic solution based solely on manipulating the equations symbolically). With that said, we only need the most basic of integration techniques to deal with the problem at hand (phew!). In fact, most of what needs to be known revolves around the number \(e\).

\(e\) is special because the derivative of \(e^{x}\) is \(e^{x}\)! Since the derivative measures the rate of change of a function, this simply means that at any point \(x\), the slope of the function \(e^{x}\) is \(e^{x}\) (and this is true no matter the value of \(x\) selected). A very special number indeed! The function \(e^{x}\) is also the inverse of the natural logarithm function \(ln(x)\) (which could be written \(log_{e}(x)\), but almost never is… however, some texts call it \(log(x)\), which can cause confusion with \(log_{10}(x)\), which is also referred to as \(log(x)\) in some texts, especially in engineering, so be cautious when you see a \(log(x)\) that you know what the context is). The value of a logarithm is the exponent to which the given base must be raised to produce the number whose logarithm is being taken. For example, \(log_{10}(100) = 2\) because \(10^{2} = 100\). Similarly, \(log_{10}(1000) = 3\) since \(10^{3} = 1000\). And so on. Using \(e\) as the base is common in physics and mathematics because of its special property, detailed above, that \(de^{x}/dx = e^{x}\). Just looking at that, the original differential equation (\ref{diff}) should come to mind, because it too has the derivative of some function of a variable on the left, and that same function (no derivative) on the right; because of \(e\)’s special ability, it makes a good candidate for the form of the function \(N(t)\). However, before jumping to (correct) conclusions, a few more words are needed on the tools before progressing.

To integrate polynomials, e.g. \(ax^3 + bx^2 + cx + g\), we do the opposite of differentiation, and many of the same properties can be leveraged (again, check out Relativistic energy expansion if this doesn’t make sense re: the exponentiation, the distributive property, or pulling out the constants). Using that as an example:

\[\int (ax^3 + bx^2 + cx + g)\,dx = \int (ax^3 + bx^2 + cx^1 + gx^0)\,dx =\]

\[\int ax^3\,dx + \int bx^2\,dx + \int cx^1\,dx + \int gx^0\,dx =\]

\[a \int x^3\,dx + b \int x^2\,dx + c \int x^1\,dx + g \int x^0\,dx\]

Here, these are called indefinite integrals because no integration range has been specified; we’re just going through the integration step but not the evaluation step. To integrate a power of \(x\), add one to the exponent, then divide by the new exponent (i.e. multiply by 1 over the starting exponent plus one). Because it is the inverse of taking the derivative, if we take the derivative of the result of the integral, we should get the original equation back. For example, using \(f(x) = x^{2}\), \(\int x^2\,dx = 1/3 x^3 + C\) where \(C\) is called the integration constant. If we take the derivative of the result of that integration (remembering that the derivative, or rate of change, of a constant is 0), we get \(d/dx(1/3 x^3 + C) = 1/3\cdot 3\cdot x^2\cdot 1 + 0 = x^2\), which is what we started with. Continuing with the above polynomial example:

\[a \int x^3\,dx + b \int x^2\,dx + c \int x^1\,dx + g \int x^0\,dx =\]

\[\left(a\cdot \frac{1}{3 + 1}\cdot x^{3 + 1} + A\right) + \left(b\cdot \frac{1}{2 + 1}\cdot x^{2 + 1} + B\right)\]

\[ + \left(c\cdot \frac{1}{1 + 1}\cdot x^{1 + 1} + C\right) + \left(g\cdot \frac{1}{0 + 1}\cdot x^{0 + 1} + G\right) = \]

\[\frac{a}{4}x^4 + \frac{b}{3}x^3 + \frac{c}{2}x^2 + gx^1 + hx^0 = \frac{a}{4}x^4 + \frac{b}{3}x^3 + \frac{c}{2}x^2 + gx + h\]

where the integration constants are all added together and called \(h\): \(h = A + B + C + G\). The final integration constant is usually written without the \(x^0 = 1\) polynomial term, but it is good to remember that it’s there because if that equation is further integrated, then you would need to add one to that exponent and it would become \(x^{0 + 1} = x\). Taking the derivative of the result of the integration gives:

\[\frac{d}{dx}\left(\frac{a}{4}x^4 + \frac{b}{3}x^3 + \frac{c}{2}x^2 + gx + h\right) =\]

\[\frac{d}{dx}\left(\frac{a}{4}x^4\right) + \frac{d}{dx}\left(\frac{b}{3}x^3\right) + \frac{d}{dx}\left(\frac{c}{2}x^2\right) + \frac{d}{dx}\left(gx\right) + \frac{d}{dx}\left(h\right) =\]

\[\frac{a}{4}\frac{d}{dx}\left(x^4\right) + \frac{b}{3}\frac{d}{dx}\left(x^3\right) + \frac{c}{2}\frac{d}{dx}\left(x^2\right) + g\frac{d}{dx}\left(x\right) + 0 =\]

\[\frac{a}{4}\cdot 4\cdot x^{4 - 1}\cdot 1 + \frac{b}{3}\cdot 3\cdot x^{3 - 1}\cdot 1 + \frac{c}{2}\cdot 2\cdot x^{2 - 1}\cdot 1 + gx^{1 - 1}\cdot 1 + 0 =\]

\[ax^3 + bx^2 + cx^1 + gx^0 = ax^3 + bx^2 + cx + g\]

which is the function we started out with.
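
As a sanity check on that power rule, here is a minimal sketch in Python using the SymPy symbolic-math library (the variable names are mine): integrate a term, then differentiate it back.

    import sympy as sp

    x = sp.symbols('x')
    F = sp.integrate(x**2, x)    # power rule gives x**3/3 (SymPy omits the constant C)
    print(F)                     # x**3/3
    print(sp.diff(F, x))         # differentiating gives back x**2

Differentiating and integrating functions like \(e^x\) works differently than it does for polynomials. To differentiate, multiply the function of \(e\) by the derivative of its exponent, and just keep the function of \(e\) “as is”: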

\[\frac{d}{dx}\left(e^{\oplus(x)}\right) = \frac{d}{dx}\left(\oplus(x)\right)\cdot e^{\oplus(x)}\]

So, as an example,

\[\frac{d}{dx}\left(3e^{2x^3}\right) = 3\frac{d}{dx}\left(2x^3\right)\cdot e^{2x^3} = 3\cdot6x^2e^{2x^3} = 18x^2e^{2x^3}\]
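
As a quick numeric spot check of that derivative, here is a sketch in Python (the test point \(x = 0.7\) and the step size are arbitrary choices of mine):

    import math

    def f(x):
        return 3 * math.exp(2 * x**3)

    h, x = 1e-7, 0.7
    numeric = (f(x + h) - f(x - h)) / (2 * h)     # central finite difference
    analytic = 18 * x**2 * math.exp(2 * x**3)
    print(numeric, analytic)                      # both about 17.51: they agree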

Integrating the result \(18x^2e^{2x^3}\) actually takes a lot of work (even though we know the answer, pretend we don’t). Before tackling that, let’s head back to \(e^{x}\) and figure out how to integrate it. Since we know that the derivative of \(e^{x}\) is \(e^{x}\), one might surmise (since integration is the antiderivative) that the integral of \(e^{x}\) is also \(e^{x}\), and that would be nearly correct, except we need to include the integration constant:

\[\int e^{x}\,dx = e^{x} + C\]

To check, take the derivative of the answer:

\[\frac{d}{dx}\left(e^{x} + C\right) = e^{x} + 0 = e^{x}\]
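
For the numerically inclined, here is a small sketch that checks the integral with SciPy’s quad routine (the integration interval \([0, 1]\) is an arbitrary choice of mine):

    from math import e, exp
    from scipy.integrate import quad

    value, _ = quad(exp, 0.0, 1.0)    # definite integral of e^x from 0 to 1
    print(value, e - 1.0)             # both about 1.71828, i.e. e^1 - e^0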

We will need this result later for this example! The issue with \(18x^2e^{2x^3}\) is that there are two functions of \(x\) in the integrand. There is an equivalent in integration to the product rule in differentiation, called integration by parts [3], but trying to use it in this case would not actually help matters (the problem is the exponent of \(e\), which is not just a simple variable, and \(e^x\) is the only function involving \(e\) that we know how to integrate yet). Integration by parts is incredibly useful, but it isn’t the tool we want here. Without going into it, if you try integration by parts on something and it gets messier rather than easier, then back out and try something else. In this case, integration by substitution is going to do the trick. Only experience will indicate which method to use, although I did give it a try with integration by parts and realized there was no hope in solving it that way! The hint is above as well: we only know how to integrate one function involving \(e\), and that is \(e^x\). To use the only tool we have, we need the integral to look something like \(\int e^{y}\,dy\) for a single variable \(y\). To do that, we can substitute \(y = 2x^3\), so:

\[\int 18x^2e^{2x^3}\,dx = \int 18x^2e^y\,dx\]

That moved us forward a bit, but the stuff in front of \(e^y\) is more than we can cope with. The trick is that with \(e^y\) in the equation, we don’t want to integrate over \(dx\), we want to integrate over \(dy\)! Well,

\[\frac{dy}{dx} =\frac{d}{dx}(2x^3) = 6x^2\rightarrow dy = 6x^2\,dx \rightarrow dx =\frac{dy}{6x^2}\]

Substituting \(dx\) into the equation above gives:

\[\int 18x^2e^y\,dx = \int 18x^2e^y\frac{dy}{6x^2} = \int \frac{18x^2e^y}{6x^2}dy = \int 3 e^y dy = 3 \int e^y dy\]

Ahhh! This we can do, and then substitute back the value we assigned to \(y\) once done:

\[3 \int e^y\,dy = 3 e^y + C = 3 e^{2x^3} + C\]

which is the answer we were aiming for. So,

\[\int 18x^2e^{2x^3}\,dx = 3 e^{2x^3} + C\]
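
Here is a quick numeric confirmation of that result (a sketch in Python with SciPy; the interval \([0, 1]\) is again an arbitrary choice): the definite integral of the integrand should equal the difference of the antiderivative at the endpoints.

    from math import exp
    from scipy.integrate import quad

    integrand = lambda x: 18 * x**2 * exp(2 * x**3)
    antiderivative = lambda x: 3 * exp(2 * x**3)    # the result above, with C = 0

    value, _ = quad(integrand, 0.0, 1.0)
    print(value, antiderivative(1.0) - antiderivative(0.0))   # both about 19.17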

The thing that I, and many others, find frustrating is that differentiation is a grind, but it’s purely mechanical; for integration, however, only experience will give hints at how to arrive directly at an answer (and whether an analytic answer is possible at all). Sometimes, several techniques need to be tried one after the other before figuring out what might or might not work. Intuition and creativity play a big role in integration. As a final note, because we know the original equation, we also know that \(C = 0\) here to match our initial conditions.

Before we can solve the decay or growth differential equation, we need to know how to deal with the inverse of the \(e^x\) function, which is the \(ln(x)\) function. In the case of the derivative:

\[\frac{d}{dx} ln(x) = \frac{1}{x}\]

The proof is beyond the scope of this note, but there are plenty of derivations of it online, both as written proofs and as videos of more intuitive arguments. The integral (antiderivative) is then:

\[\int \frac{1}{x} dx = ln\left|x\right| + A\]

This can be extended to more complicated functions as well, as long as we integrate with respect to that function (note the \(df(x)\) rather than \(dx\)):

\begin{equation}
\int \frac{1}{f(x)}\, df(x) = ln\left|f(x)\right| + C\label{int1overx}\tag{2}
\end{equation}

The reason for the absolute value bars is also beyond the scope of what I’m doing here (there are again lots of resources online), but we’re going to need that last result.
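
As a quick numeric check of that rule in its simplest form (a Python sketch with SciPy; the interval from 1 to 2 is an arbitrary choice of mine):

    from math import log
    from scipy.integrate import quad

    value, _ = quad(lambda u: 1.0 / u, 1.0, 2.0)   # integral of (1/u) du from 1 to 2
    print(value, log(2.0))                         # both about 0.6931, i.e. ln(2)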

Now we can solve the differential equation to find the form of the function \(N(t)\), which will tell us how it changes with time (given some initial condition, which we called \(N_{initial}\) above, but is \(N(t)\) at time \(t = 0\), or \(N(0)\)). The first thing that needs to be done is to make sure that all the variables to do with \(N\) are on one side of the equation, and that all the variables to do with \(t\) (or don’t have a dependency on either) are on the other. This is a process known as separation of variables. We start with:

\[dN(t) = -\Gamma N(t) dt \rightarrow \frac{dN(t)}{N(t)} = -\Gamma dt\]

Integrating both sides of the equation preserves the equality (if two quantities are equal, so are their integrals), so we can now work with the left side separately from the right side. The left side only has variables related to \(N\), so:

\[\int\frac{dN(t)}{N(t)} = \int\frac{1}{N(t)}dN(t)\]

which looks like equation (\ref{int1overx}), but with \(N(t)\) instead of \(f(x)\). Therefore, we know:

\[\int\frac{1}{N(t)}dN(t) = ln\left|N(t)\right| + A\]

But \(N(t) \geq 0\), since we can’t have a negative count of something (at least in the context we’re examining… for instance, we cannot have fewer than 0 undecayed nuclei in a sample), so we can get rid of the absolute value operator:

\[ln\left|N(t)\right| + A = ln\left(N(t)\right) + A\label{soln1}\tag{3}\]

Turning now to the right hand side, which only has variables related to \(t\):

\[\int -\Gamma\, dt = -\Gamma \int t^{0}\, dt = -\Gamma\, \frac{1}{0 + 1}t^{0 + 1} + B = -\Gamma t + B \]

and we have (letting the constants \(A - B = C\)):

\[ln\left(N(t)\right) + A = -\Gamma t + B\rightarrow ln\left(N(t)\right) + C = -\Gamma t \]

Since \(e^{x}\) and \(ln(x)\) are inverse functions of each other, this hints at how to get rid of the vexing logarithm on the left hand side of the integrated equation. Raising \(e\) to the power of each side:

\[e^{ln\left(N(t)\right) + C} = e^{- \Gamma t} \rightarrow e^{ln\left(N(t)\right)}e^{C} = e^{- \Gamma t}\]

since \(\diamond^{a}\cdot\diamond^{b} = \diamond^{a + b}\) and vice versa. We can now use the property that \(e^{ln(x)} = x\) (since \(e^{x}\) and \(ln(x)\) are inverse functions of each other) to write:

\[N(t)\cdot e^{C} = e^{- \Gamma t}\rightarrow N(t) = \frac{1}{e^{C}}e^{- \Gamma t}\]

But we don’t (at this moment) know the value of the integration constant \(C\), so we can’t really do much with the equation yet. Here, we need to look for either boundary conditions (some physical limitation on the extent of the system, like the sides of a box), or initial conditions (which is a kind of boundary condition, but usually time-related). Here, we can ask, what was the condition of the system at some arbitrary point in time that we label as \(t = 0\)? Just substituting:

\[N(0) = \frac{1}{e^{C}}e^{- \Gamma\cdot 0}\rightarrow N(0) = \frac{1}{e^{C}}\]

So now we know the value of \(1/e^{C}\)! It is just the value of \(N_{initial}\), which is the instantaneous number of things we’re counting at the time when we start to measure changes in the system, which is \(N(t = 0) = N(0)\). Substituting \(N(0)\) for \(1/e^{C}\):

\[N(t) = N(0) e^{- \Gamma t}\]

Which is the solution for the differential equation we set out to find. Remember, by convention \(\Gamma\) is positive, so putting the negative sign in front of it makes this a decay equation. To make it a growth equation, drop the negative sign.
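
To close the loop, here is a hedged simulation sketch in Python (the sample size, time step, and 20-minute half-life are all my own choices for illustration) that decays atoms at random and compares the survivors against \(N(t) = N(0)e^{-\Gamma t}\):

    import math, random

    Gamma = math.log(2) / 20.0     # per minute, from an assumed 20-minute half-life
    dt, T = 0.05, 60.0             # small time step and total run time, in minutes
    atoms = 2000                   # N(0)

    t = 0.0
    while t < T:
        # each undecayed atom decays with probability Gamma*dt during this small step
        atoms -= sum(1 for _ in range(atoms) if random.random() < Gamma * dt)
        t += dt

    print(atoms, 2000 * math.exp(-Gamma * T))   # both about 250 (three half-lives)

Any single run will land a little above or below 250; averaging many runs pulls the result in line, exactly as in the Law of Large Numbers discussion above.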

As a final word, a note on something that has gotten me in trouble more than once: indefinite versus definite integrals (I will talk about definite integrals in another post). When doing something like the above, definite integrals won’t work. Why? Because the equation we’re looking for is an instantaneous value (\(N(t)\) here) for a given parameter (\(t\) in this case), whereas an integral is over a range (again, in this case, it would be a period of time). To demonstrate this, naively doing the definite integral of equation (\ref{soln1}) from 0 to \(t\) gives:

\[\int_0^t\frac{dN(t)}{N(t)} = \left(\left.ln\left(N(t)\right)\right|_{N(t) = t} + C\right) - \left(\left.ln\left(N(t)\right)\right|_{N(t) = 0} + C\right) = ln(t) - ln(0)\]

This is wrong, don’t do it! The problem, amongst other things, is that \(ln(0)\) diverges (to \(-\infty\)), which means that the equation is nonsensical. Since there is nothing special about \(t = 0\), the fact that the equation blows up at that point is a good indication to me that I’ve done something wrong. I certainly did that sort of thing enough times when I was learning that it is one of the reasons why I did this particular example.

If you have any feedback, or questions, please leave a comment so others can see it. I will try to answer if I can. If you think this was valuable to you, please leave a comment or send me an email at dafriar23@gmail.com. If you want to use this in some way yourself (other than for personal use to learn), please contact me at the given email address and we’ll see what we can do.

© 2018 James Botte. All rights reserved.
