						Function std.numeric.jensenShannonDivergence
Computes the Jensen-Shannon divergence between a and b, which is the sum (a[i] * log(2 * a[i] / (a[i] + b[i])) + b[i] * log(2 * b[i] / (a[i] + b[i]))) / 2 taken over all elements. The base of the logarithm is 2. The ranges are assumed to contain elements in [0, 1]. Usually the ranges are normalized probability distributions, but this is not required or checked by jensenShannonDivergence. If the inputs are normalized, the result is bounded within [0, 1]. The three-parameter version stops evaluating as soon as the intermediate result is greater than or equal to limit.
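For concreteness, the following sketch (not part of the original documentation; the distributions and the expected value are illustrative and computed by hand from the formula above) evaluates the definition term by term and checks it against the library call:

import std.math : isClose, log2;
import std.numeric : jensenShannonDivergence;

void main()
{
    double[] a = [0.5, 0.5];
    double[] b = [0.25, 0.75];

    // Evaluate the documented formula directly (no zero elements here,
    // so every logarithm is well defined).
    double expected = 0;
    foreach (i; 0 .. a.length)
        expected += a[i] * log2(2 * a[i] / (a[i] + b[i]))
                  + b[i] * log2(2 * b[i] / (a[i] + b[i]));
    expected /= 2;

    assert(isClose(jensenShannonDivergence(a, b), expected, 1e-10));
    assert(isClose(expected, 0.04879, 1e-3)); // about 0.0488 bits
}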
						
CommonType!(ElementType!Range1, ElementType!Range2) jensenShannonDivergence(Range1, Range2)
(
  Range1 a,
  Range2 b
)
if (isInputRange!Range1 && isInputRange!Range2 && is(CommonType!(ElementType!Range1, ElementType!Range2)));

CommonType!(ElementType!Range1, ElementType!Range2) jensenShannonDivergence(Range1, Range2, F)
(
  Range1 a,
  Range2 b,
  F limit
)
if (isInputRange!Range1 && isInputRange!Range2 && is(typeof(CommonType!(ElementType!Range1, ElementType!Range2).init >= F.init) : bool));
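As the return type suggests, the two ranges may have different element types; the result uses their common type. A brief illustrative sketch (not from the original page) mixing a float range with a double range:

import std.numeric : jensenShannonDivergence;

void main()
{
    float[]  a = [0.5f, 0.5f];
    double[] b = [0.25, 0.75];

    // CommonType!(float, double) is double, so the result is a double.
    double d = jensenShannonDivergence(a, b);
    assert(d > 0);
}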
				Example
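A short usage sketch; the distributions and the expected value here are illustrative, computed from the definition above rather than taken from the library's own tests:

import std.math : isClose;
import std.numeric : jensenShannonDivergence;

void main()
{
    double[] p = [0.0, 0.0, 0.0, 1.0];      // point mass on the last outcome
    double[] q = [0.25, 0.25, 0.25, 0.25];  // uniform distribution

    // A distribution has zero divergence from itself.
    assert(jensenShannonDivergence(p, p) == 0);

    // The divergence is symmetric and, for normalized inputs, lies in [0, 1].
    double d = jensenShannonDivergence(p, q);
    assert(isClose(d, jensenShannonDivergence(q, p)));
    assert(isClose(d, 0.548795, 1e-5));

    // Three-parameter form: evaluation stops once the intermediate result
    // reaches the limit. With a limit the divergence never reaches, the
    // result matches the two-parameter form.
    assert(isClose(jensenShannonDivergence(p, q, 1.0), d));
}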
Authors
Andrei Alexandrescu, Don Clugston, Robert Jacques, Ilya Yaroshenko
License
Boost License 1.0.