Notes on EPR Paradox, Entanglement and Bell’s Inequality


1. Elements of reality (aka hidden variables) vs uncertainty.

Suppose we have a spin-1/2 particle (e.g. an electron) in the state |n+\rangle (fig. 1). Quantum Mechanics (QM) states that if we measure the spin along the z direction we get |z+\rangle with probability cos^2(\theta/2) and |z-\rangle with probability sin^2(\theta/2), where \theta is the angle between the n and z directions. There is no way to tell which result will be obtained; the uncertainty of measurement of non-commuting observables is a fundamental law of nature.

Einstein, Podolsky and Rosen (EPR) say – no, there is nothing fundamental here. The particle has elements of reality which determine the measurement result in a unique deterministic way; the particle is either of type (z+,n+) (with probability cos^2(\theta/2)), or of type (z-,n+) (with probability sin^2(\theta/2)), but not of both types at the same time. Statistically the results of spin measurements exactly reproduce the QM predictions.

2. Entanglement.

QM rules out the “elements of reality” because generally they cannot be measured simultaneously; if we have a particle in the state |n+\rangle and measure its spin along the z direction, the original state |n+\rangle “collapses” either to |z+\rangle or to |z-\rangle and does not “exist” anymore.

EPR say – no, this does not seem to be true because we can measure a particle’s spin without acting on the particle. Suppose we have the singlet state of two spin-1/2 particles A and B:

\psi = \frac{1}{\sqrt{2}}(|+\rangle_A\otimes|-\rangle_B - |-\rangle_A\otimes|+\rangle_B)

this state is fully invariant under rotations, so it can be written as

\psi = \frac{1}{\sqrt{2}}(|z+\rangle_A\otimes|z-\rangle_B - |z-\rangle_A\otimes|z+\rangle_B)

or

\psi = \frac{1}{\sqrt{2}}(|n+\rangle_A\otimes|n-\rangle_B - |n-\rangle_A\otimes|n+\rangle_B)

We can create an experimental setup such that a spin measurement on particle A does not affect particle B in any way; and if we measure the spin of particle A along some direction, the spin of particle B is always opposite, so in the end we measure the spin of particle B without disturbing it.

N.B. Prior to the measurement the particles A and B had no (pure) states of their own, only the common 2-particle singlet state. The measurement does not really “collapse” a state of particle B, because it had no (pure) state before the measurement; instead it “creates” a (pure) state of particle B.

3. EPR paradox.

The challenge of EPR is to reproduce all statistical QM predictions using the classical concept of “elements of reality”. As an example let us consider the same singlet state and measure the spins of the particles A and B along two different directions z_1 and z_2 (fig. 2). QM predicts that the probability of obtaining |z_1+\rangle for particle A and |z_2+\rangle for particle B is P(++)=\frac{1}{2}sin^2(\theta/2); the other possibilities are P(+-)=\frac{1}{2}cos^2(\theta/2), P(-+)=\frac{1}{2}cos^2(\theta/2), P(--)=\frac{1}{2}sin^2(\theta/2), and all four add up to 1.

EPR say: in this experiment we have the following statistical mixture of pairs of particles:

  1. a fraction \frac{1}{2}cos^2(\theta/2) of pairs having the particle A of type (z_1+,z_2+) and the particle B of type (z_1-,z_2-)
  2. a fraction \frac{1}{2}sin^2(\theta/2) of pairs having the particle A of type (z_1+,z_2-) and the particle B of type (z_1-,z_2+)
  3. a fraction \frac{1}{2}sin^2(\theta/2) of pairs having the particle A of type (z_1-,z_2+) and the particle B of type (z_1+,z_2-)
  4. a fraction \frac{1}{2}cos^2(\theta/2) of pairs having the particle A of type (z_1-,z_2-) and the particle B of type (z_1+,z_2+)

It can be seen that the mixture reproduces the QM predictions whether we measure the spins along the same direction or along different directions. For example, if we measure the spins of both particles along z_1 then the probability of obtaining + for the particle A and - for the particle B is equal to the sum of fractions (1) and (2), giving \frac{1}{2}cos^2(\theta/2) + \frac{1}{2}sin^2(\theta/2)=\frac{1}{2} as one should expect; if we measure the spin of particle A along z_1 and the spin of particle B along z_2, then the probability of obtaining + for particle A and - for particle B is equal to the fraction (1), giving \frac{1}{2}cos^2(\theta/2), again consistent with the QM prediction.
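One more check: if we measure the spins of both particles along z_1, the probability of obtaining + for both particles is zero in the mixture (in no fraction are both particles of type z_1+), which reproduces the perfect anti-correlation predicted by QM, P(++)=\frac{1}{2}sin^2(0/2)=0.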

It turns out that it is not easy to propose an experiment disproving the EPR paradox.

4. Bell’s inequality.

Let us take the same 2-particle singlet state, but now measure spins along 3 directions (fig. 3). Using EPR’s “elements of reality” framework, the statistical mixture consists of 8 fractions of pairs of the particles A and B:

fraction | particle A       | particle B
P_1      | (z_1+,z_2+,z_3+) | (z_1-,z_2-,z_3-)
P_2      | (z_1+,z_2+,z_3-) | (z_1-,z_2-,z_3+)
P_3      | (z_1+,z_2-,z_3+) | (z_1-,z_2+,z_3-)
P_4      | (z_1+,z_2-,z_3-) | (z_1-,z_2+,z_3+)
P_5      | (z_1-,z_2+,z_3+) | (z_1+,z_2-,z_3-)
P_6      | (z_1-,z_2+,z_3-) | (z_1+,z_2-,z_3+)
P_7      | (z_1-,z_2-,z_3+) | (z_1+,z_2+,z_3-)
P_8      | (z_1-,z_2-,z_3-) | (z_1+,z_2+,z_3+)

We choose 2 directions z_i and z_j and measure the spin of particle A along z_i and the spin of particle B along z_j.
Let us denote by P_{ij} the probability of obtaining (+) for the spin of particle A and (+) for the spin of particle B. From the table above we have:

  • P_{12}=P_3+P_4
  • P_{13}=P_2+P_4
  • P_{32}=P_3+P_7

Using a little algebra,

P_{13}+P_{32}=P_2+P_4+P_3+P_7=P_{12}+P_2+P_7 \ge P_{12},

or finally

P_{12} \le P_{13} + P_{32}

But this inequality is easily violated in QM. Let us consider a planar configuration of the axes z_i and choose \theta_{13}=\theta_{32}=\theta (fig. 3); then the inequality becomes

\frac{1}{2}sin^2(\theta) \le sin^2(\theta/2)

which is false for any 0 < \theta < \pi/2.
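For example, taking \theta = \pi/3 (so that \theta_{12} = 2\pi/3):

\frac{1}{2}sin^2(\pi/3) = \frac{3}{8} > \frac{1}{4} = sin^2(\pi/6),

so the QM probabilities violate the hidden-variable bound.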

The result can be formulated as Bell’s theorem:


No physical theory of local hidden variables can ever reproduce all of the predictions of quantum mechanics.


 

Think twice before raising exception in constructor


Recently the prolific Delphi writer Nick Hodges declared war on the use of nil pointers. Whether using nil pointers is bad or not is arguable, but the point is that the proposed “Guard Pattern” is really an antipattern. Here is a simple demo:

program Project11;

{$APPTYPE CONSOLE}

uses
  SysUtils, Classes;

type
  TMyClass = class
  private
    FList: TStringList;
  public
    constructor Create(AList: TStringList);
    destructor Destroy; override;
  end;

constructor TMyClass.Create(AList: TStringList);
begin
  if AList = nil then
  begin
    raise Exception.Create('Don''t you dare pass me a nil reference!');
  end;
  FList:= AList;
  FList.Add('TMyClass instance created');
end;

destructor TMyClass.Destroy;
begin
  FList.Add('TMyClass instance destroyed');
  inherited Destroy;
end;

procedure Test;
begin
  TMyClass.Create(nil);
end;

begin
  try
    Test;
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
end.

If you know how exceptions raised in a constructor work, you can see that the above code raises 2 exceptions: the first is the “wanted” exception in the constructor; the second is an “unwanted” access violation in the destructor, because Delphi automatically calls the destructor on the partially constructed instance, and FList is still nil at that point.

The moral is: if you raise an exception in a constructor, be very careful when writing the destructor, because you need to handle the case of an improperly initialized instance.
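One safe way to write the destructor for this demo (my sketch of the straightforward fix, not code from the original post) is to allow for the possibility that FList was never assigned:

destructor TMyClass.Destroy;
begin
  // FList is nil if the constructor raised before assigning it
  if FList <> nil then
    FList.Add('TMyClass instance destroyed');
  inherited Destroy;
end;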

How to get size of adjacent record’s fields correctly?


Suppose we have two record types TRecA and TRecB declared as shown below; in a sense the TRecB type is ‘inherited’ from TRecA.

The goal is to write a general method which clears all TRecB fields that are not ‘inherited’ from TRecA. The code below proposes 3 candidate solutions, TRecB.ClearX:

    type
      PRecA = ^TRecA;
      TRecA = record
        FieldA1: LongWord;
        FieldA2: Byte;
      end;
    
      PRecB = ^TRecB;
      TRecB = record
        FieldA1: LongWord;
        FieldA2: Byte;
    
        FieldB1: Word;
        FieldB2: LongInt;
        FieldB3: Byte;
        Sentinel: record end;
    
        procedure Clear1;
        procedure Clear2;
        procedure Clear3;
      end;
    
    { TRecB }
    
    procedure TRecB.Clear1;
    begin
      FillChar(FieldB1, SizeOf(TRecB) - SizeOf(TRecA), 0);
    end;
    
    procedure TRecB.Clear2;
    begin
      FillChar(FieldB1,
        SizeOf(TRecB) - Integer(@PRecB(nil)^.FieldB1), 0);
    end;
    
    procedure TRecB.Clear3;
    begin
      FillChar(FieldB1,
        Integer(@PRecB(nil)^.Sentinel) - Integer(@PRecB(nil)^.FieldB1), 0);
    end;

At first glance all three ClearX methods do the same thing, but they do not. The Clear1 solution is simply wrong, while Clear2 and Clear3 are different from each other.

To understand why, one needs to understand how type sizes and alignments are calculated.

Since the 80-bit Extended was kicked out of Delphi, every type’s size is a multiple of the type’s alignment. The TRecA type has alignment 4 (the alignment of LongWord, the largest alignment among TRecA’s fields); its fields occupy 5 bytes, which is rounded up to the nearest multiple of 4, so SizeOf(TRecA) = 8.

The alignment of the Word type is 2, so the offset of FieldB1 in the TRecB type is 6, not 8 – the ‘non-inherited’ fields do not start where TRecA ends. That is why the Clear1 solution is wrong: SizeOf(TRecB) - SizeOf(TRecA) is not in general the number of bytes spanned by the FieldBX fields, so Clear1 may happen to work for some sets of FieldBX fields but is generally incorrect.

Applying the alignment rules to the TRecB type we obtain the following layout:

      TRecBLayout = record
        FieldA1: LongWord;
        FieldA2: Byte;
        Filler1: Byte;  
        FieldB1: Word;
        FieldB2: LongInt;
        FieldB3: Byte;
        Filler2: Byte;  
        Filler3: Byte;  
        Filler4: Byte;
      end;  

Filler bytes at the end are required to make the TRecB size a multiple of the TRecB alignment (= 4).

So SizeOf(TRecB) = 16, but the offset of TRecB.Sentinel is 13 (unfortunately the Delphi compiler does not support an OffsetOf() built-in). That is why the Clear2 and Clear3 methods differ: only Clear3 does exactly what was asked – it clears the ‘non-inherited’ fields; Clear2 also clears the filler bytes at the record’s end.
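These numbers can be double-checked with a small sketch (it uses the same nil-pointer trick as Clear2 and Clear3, since there is no OffsetOf built-in):

    procedure PrintLayout;
    begin
      Writeln('SizeOf(TRecA)      = ', SizeOf(TRecA));                   // 8
      Writeln('SizeOf(TRecB)      = ', SizeOf(TRecB));                   // 16
      Writeln('Offset of FieldB1  = ', Integer(@PRecB(nil)^.FieldB1));   // 6
      Writeln('Offset of Sentinel = ', Integer(@PRecB(nil)^.Sentinel));  // 13
      Writeln('Clear1 clears ', SizeOf(TRecB) - SizeOf(TRecA), ' bytes');          // 8
      Writeln('Clear2 clears ',
        SizeOf(TRecB) - Integer(@PRecB(nil)^.FieldB1), ' bytes');                  // 10
      Writeln('Clear3 clears ',
        Integer(@PRecB(nil)^.Sentinel) - Integer(@PRecB(nil)^.FieldB1), ' bytes'); // 7
    end;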

Endianness issue


TForge 0.68 is released.

TForge 0.68 is a maintenance release which fixes a memory leak in the THMAC implementation, fixes an endianness issue in non-crypto hash functions such as CRC32, and adds minor improvements to the ByteArray type.


Previous releases returned message digest values for non-crypto hash functions in reversed byte order; the reasoning behind this design was that such functions return integer values, and on a little-endian CPU the byte order is reversed. This is not an issue anymore.

Since version 0.68 all hash functions return message digests in big-endian format, so there is no need to reverse the byte order for non-crypto functions. This design is more consistent and also follows the “least surprise” rule. When a non-crypto message digest is converted to an integer type on a little-endian CPU, the byte order is reversed under the hood.

TForge also contains alternative, simpler and faster implementations of the non-crypto functions in the tfUtils.pas unit (the TCRC32 and TJenkins1 types) which return message digests as an integer type and thus have no endianness dilemma at all.


Whenever a ByteArray method treats the instance data as an integer, it now assumes big-endian format. Examples are the 2 new methods added in ver 0.68 – ByteArray.Incr and ByteArray.Decr. As the names suggest, the first method increments the instance data and the second decrements it; both assume that the data is an unsigned integer in big-endian format.

The ByteArray.Incr method is useful for implementing some crypto primitives such as the CTR mode of operation for block ciphers.
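A rough illustration (my sketch, not from the TForge documentation): it assumes the ByteArray.FromText and ToHex methods behave as shown in the TForge 0.65 post below, and that Incr/Decr modify the instance in place, as the description above suggests.

procedure IncrDemo;
var
  Counter: ByteArray;
begin
  // 4 ASCII bytes 'AAAA' = hex 41414141 - purely illustrative content
  Counter:= ByteArray.FromText('AAAA');
  Writeln(Counter.ToHex);   // 41414141
  Counter.Incr;             // big-endian increment: only the last byte changes
  Writeln(Counter.ToHex);   // expected 41414142
  Counter.Decr;             // and back
  Writeln(Counter.ToHex);   // expected 41414141
end;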


Happy New Year!

Dabbling in BASM


TForge 0.67 adds a BASM optimization of the SHA-256 algorithm (it also applies to SHA-224, which is based on SHA-256). The library now contains BASM-optimized implementations of the MD5, SHA1, SHA224 and SHA256 algorithms for the Win32 and Win64 platforms.

Here are some benchmarks: I measured the time (in milliseconds) required to hash 500 MB of data on my laptop (Win7 64-bit, Intel Core i3 CPU). Both the Pascal and BASM implementations use fully unrolled compression routines. I used default compiler settings.
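For reference, a timing loop along the following lines reproduces this kind of measurement (my sketch, not the exact code behind the charts below); it relies on the THash.SHA256 fluent API described in the TForge 0.65 post further down and hashes a dummy in-memory buffer instead of real file data.

program BenchSHA256;

{$APPTYPE CONSOLE}

uses
  Windows, tfTypes, tfHashes;

const
  ChunkSize = 1024 * 1024;   // 1 MB per Update call
  TotalMB   = 500;           // 500 MB in total

var
  Hash: THash;
  Buffer: array[0 .. ChunkSize - 1] of Byte;
  I: Integer;
  StartTick: Cardinal;

begin
  FillChar(Buffer, SizeOf(Buffer), $5A);   // dummy data instead of a real file
  Hash:= THash.SHA256;
  StartTick:= GetTickCount;
  for I:= 1 to TotalMB do
    Hash.Update(Buffer, ChunkSize);
  Writeln('SHA256, 500 MB: ', GetTickCount - StartTick, ' ms');
  Writeln('Digest: ', Hash.Digest.ToHex);
end.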

32-bit compiler (Delphi XE):

[Benchmark chart: pure Pascal implementation]

[Benchmark chart: BASM implementation]

64-bit compiler (FPC 2.6.2):

[Benchmark chart: pure Pascal implementation]

[Benchmark chart: BASM implementation]

High Performance Hash Library


TForge 0.66 released.

What’s new:

1. More hash algorithms added:

  • SHA224
  • SHA384
  • SHA512

2. The MD5 and SHA1 compression routines were rewritten in BASM for the CPUX86-WIN32 and CPUX64-WIN64 platforms and are now several times faster than the pure Pascal code (about 4 times with the 32-bit compiler (Delphi XE) and about 3 times with the 64-bit compiler (FPC 2.6.2) on my system).

3. A memory allocation bug in the HMAC implementation was fixed.
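A quick usage sketch for the new algorithms (assuming they are exposed the same way as THash.MD5 and THash.SHA1 in the 0.65 post below, i.e. as THash.SHA224, THash.SHA384 and THash.SHA512 – I have not verified the exact names here):

procedure PrintSHA2Digests(const FileName: string);
begin
  // assumed by analogy with THash.MD5 / THash.SHA1 from the 0.65 post below
  Writeln('SHA224: ', THash.SHA224.UpdateFile(FileName).Digest.ToHex);
  Writeln('SHA384: ', THash.SHA384.UpdateFile(FileName).Digest.ToHex);
  Writeln('SHA512: ', THash.SHA512.UpdateFile(FileName).Digest.ToHex);
end;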

User-friendly Hash Library


TForge 0.65 released.

The release features a hash library with fluent coding support. While writing the library I was inspired by the usability of Python’s hashlib.

Current release supports:

  • Cryptographic hash algorithms:
    1. MD5
    2. SHA1
    3. SHA256
  • Non-cryptographic hash algorithms:
    1. CRC32
    2. Jenkins-One-At-Time
  • Hash-based MAC algorithm (HMAC)
  • Key derivation algorithms:
    1. PBKDF1
    2. PBKDF2

Let us consider a common problem: calculate MD5 and SHA1 digests of a file. The simplest way to do it is:

program HashFile;

{$APPTYPE CONSOLE}

uses
  SysUtils, Classes, tfTypes, tfHashes;

procedure CalcHash(const FileName: string);
begin
  Writeln('MD5:  ', THash.MD5.UpdateFile(FileName).Digest.ToHex);
  Writeln('SHA1: ', THash.SHA1.UpdateFile(FileName).Digest.ToHex);
end;

begin
  try
    if ParamCount = 1 then begin
      CalcHash(ParamStr(1));
    end
    else
      Writeln('Usage: > HashFile filename');
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
  Readln;
end.

The above application demonstrates the beauty of fluent coding. The code is compact and clear – you create an instance of a hash algorithm, feed the file data to it, generate the resulting digest and convert it to hexadecimal; the instance is freed automatically, so there is no need for an explicit call to the Free method.

But the above code is not optimal – it reads the file twice. A better solution involves some coding:

procedure CalcHash(const FileName: string);
const
  BufSize = 16 * 1024;

var
  MD5, SHA1: THash;
  Stream: TStream;
  Buffer: array[0 .. BufSize - 1] of Byte;
  N: Integer;

begin
  MD5:= THash.MD5;
  SHA1:= THash.SHA1;
  try
    Stream:= TFileStream.Create(FileName,
                                fmOpenRead or fmShareDenyWrite);
    try
      repeat
        N:= Stream.Read(Buffer, BufSize);
        if N <= 0 then Break
        else begin
          MD5.Update(Buffer, N);
          SHA1.Update(Buffer, N);
        end;
      until False;
    finally
      Stream.Free;
    end;
    Writeln('MD5:  ', MD5.Digest.ToHex);
    Writeln('SHA1: ', SHA1.Digest.ToHex);
  finally
    MD5.Burn;
    SHA1.Burn;
  end;
end;

The code also demonstrates the use of the Burn method; it is not needed here and could be safely removed together with the corresponding try/finally block, but it can be useful in other cases – it destroys all sensitive data in an instance. The use of the Burn method is optional – it is called anyway when an instance is freed, but an explicit call to Burn gives you full control over when the sensitive data is erased.

The Free method does not free an instance; it only decrements the instance’s reference count, and since the compiler can create hidden references to the instance, the moment when the reference count reaches zero and the instance is actually freed is generally controlled by the compiler.


The use of non-cryptographic hash algorithms has one caveat – since they actually return an integer value, the bytes of the digest are reversed. The idiomatic way to get the correct result is to cast the digest to an integer type:

  Writeln('CRC32: ',
    IntToHex(LongWord(THash.CRC32.UpdateFile(FileName).Digest), 8));

This issue is fixed in ver. 0.68 – see https://sergworks.wordpress.com/2014/12/27/endianness-issue/


The HMAC algorithm generates a digest using a cryptographic hash algorithm and a secret key. Here is an example of calculating the SHA1-HMAC digest of a file:

procedure SHA1_HMAC_File(const FileName: string;
                         const Key: ByteArray);
begin
  Writeln('SHA1-HMAC: ',
    THMAC.SHA1.ExpandKey(Key).UpdateFile(FileName).Digest.ToHex);
end;

begin
..
  SHA1_HMAC_File(ParamStr(1),
    ByteArray.FromText('My Secret Key'));

Key derivation algorithms generate keys from user passwords by applying hash algorithms. PBKDF1 applies a cryptographic hash algorithm directly; PBKDF2 uses HMAC. Here are usage examples:

procedure DeriveKeys(const Password, Salt: ByteArray);
begin
  Writeln('PBKDF1 Key: ',
    THash.SHA1.DeriveKey(Password, Salt,
                         10000,   // number of rounds
                         16       // key length in bytes
                         ).ToHex);
  Writeln('PBKDF2 Key: ',
    THMAC.SHA1.DeriveKey(Password, Salt,
                         10000,   // number of rounds
                         32       // key length in bytes
                         ).ToHex);
end;

begin
..
  DeriveKeys(ByteArray.FromText('User Password'),
             ByteArray.FromText('Salt'));

Configuration and Installation

The release contains 2 runtime packages, TForge and THashes. You should build them in order: first TForge, then THashes.

For Delphi users:

The packages are in Packages\DXE subfolder (Delphi XE only).

You should make the folder Source\Include available via the project’s search path before you can build the packages. To do it, open the “Project Options” dialog for each package, set “Build Configuration” to “Base” and replace the path to TFL.inc with your own path (depending on where you unpacked the downloaded archive):

[Screenshot: the Include path setting in Project Options]

Now use the project manager and build the packages in “Debug” and “Release” configurations:

[Screenshot: building the packages in the Project Manager]

Optionally, to enable Ctrl-click navigation in the editor, add paths to the source code units to the Browsing path:
[Screenshot: the Browsing path setting]

To make the packages available to an application, open the application’s “Project Options” dialog, set “Build Configuration” to “Base” and add the following path to the search path:

[Screenshot: the application’s search path setting]

For FPC/Lazarus users:

The packages are in Packages\FPC subfolder.

I don’t know much about Lazarus packages. Here are the package configuration options I used:

[Screenshots: Lazarus package configuration options]