Dual boot UEFI Windows 10/Linux Mint 19.3 system


After 2 years without Windows, I decided to assemble a desktop with dual Windows 10 / Linux Mint 19.3 boot. I remember that making a Windows/Ubuntu dual boot system was a pain in the ass after UEFI replaced the good old BIOS; but times have changed, and I was surprised how smooth and easy making a UEFI dual boot system is now.

I used an ASUS PRIME 365M-K motherboard for my desktop; I think all modern ASUS motherboards have similar UEFI support.

You install Windows 10 first. I used a brand new HDD, allocated half of it for Windows, and left the rest unpartitioned. Windows created several partitions in the allocated area; the most important one for dual boot is the EFI partition, which Windows labelled “System” or something like that; it will later be relabelled “EFI” by the Linux Mint installer.

Now you install Linux Mint 19.3. When asked how to install Linux Mint, I chose the default “Install alongside Windows” option. This is the preferable option unless you want a third system on your HDD.

If you google “Dual boot Windows Ubuntu UEFI” now, you will probably find recommendations to disable Secure Boot and do other strange things. I believe this stuff is outdated for modern hardware and the latest Ubuntu or Mint versions; all you need is to run the installation programs.

Monica Cellio drama


Disclaimer: I don’t care about StackOverflow or the StackExchange sites. I can use them for asking or answering questions, but I have no respect for the sites; if some day they disappear, I will not regret it.

I now quite understand how this crowdsourcing business works. They created an attractive platform for users to ask and answer questions, without paying a cent to either askers or answerers. OK, no offence, this is just business. But at the least, people expect some respect for what they are doing for the SO and SE money makers. Question askers expect some respect for their questions; answerers expect some respect for solving the askers’ problems; and moderators expect some respect for what they do to support the platform. No money, just respect; yet nobody who works for the SO/SE business for free is granted even that.

The Monica Cellio drama has shown that even diamond moderators get no respect from those who work for money; the SO and SE business uses its users like toilet paper, and if they think it is profitable to blame a user for something he or she did not do, they will. Nothing personal, just business.

Quantum Information and Quantum Noise


The term quantum information is really a synonym of the term quantum state, only viewed from a different angle. If a qubit has the state

|\psi\rangle =\alpha|0\rangle + \beta|1\rangle

then the complex numbers \alpha and \beta are (up to a global phase) the quantum information stored in the qubit; instead of saying “the qubit has state |\psi\rangle”, we can say “the qubit stores information |\psi\rangle”.

If we have a single qubit, we can’t pull the quantum information out of the qubit into our classical world. We need many qubits storing identical information to measure \alpha and \beta with some precision; the more precision we want, the more qubits we need. We also can’t obtain \alpha and \beta by measuring in a single basis only; we need to measure in at least two different bases.
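As an illustration of the two-bases point (a NumPy sketch of my own, not part of the original argument): two states can have identical Z-basis statistics and completely different X-basis statistics, so measuring in a single basis cannot determine \alpha and \beta.

```python
import numpy as np

# Two states with the same |alpha|, |beta| but different relative phase
psi1 = np.array([1.0, 1.0]) / np.sqrt(2)    # |+> = (|0> + |1>)/sqrt(2)
psi2 = np.array([1.0, -1.0]) / np.sqrt(2)   # |-> = (|0> - |1>)/sqrt(2)

# Z-basis outcome probabilities: |<0|psi>|^2 and |<1|psi>|^2
pz1 = np.abs(psi1) ** 2
pz2 = np.abs(psi2) ** 2
print(pz1, pz2)   # both [0.5, 0.5] -- indistinguishable in the Z basis

# X-basis outcome probabilities: project onto |+> and |->
plus = np.array([1.0, 1.0]) / np.sqrt(2)
minus = np.array([1.0, -1.0]) / np.sqrt(2)
px1 = [abs(plus @ psi1) ** 2, abs(minus @ psi1) ** 2]
px2 = [abs(plus @ psi2) ** 2, abs(minus @ psi2) ** 2]
print(px1, px2)   # [1, 0] vs [0, 1] -- perfectly distinguishable in the X basis
```

Statistics gathered in the Z basis alone would never separate these two states; the X-basis statistics do.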

Pure states

|\psi\rangle =\alpha|0\rangle + \beta|1\rangle

are not the most general qubit states. The most general states are called mixed states and are described by density matrices. The density matrix \rho of a pure state |\psi\rangle is

\rho =|\psi\rangle\langle\psi|=\begin{pmatrix}\alpha \\ \beta\end{pmatrix}(\alpha^* \beta^*)=\begin{pmatrix} |\alpha|^2 & \alpha\beta^* \\ \alpha^*\beta &|\beta|^2 \end{pmatrix}

A valid density matrix must be Hermitian, positive semidefinite, and have trace 1; vice versa, any Hermitian positive semidefinite matrix with trace 1 is a valid density matrix.

An example of a density matrix of a non-pure state:

\rho =p_0|0\rangle\langle 0|+p_1|1\rangle\langle 1|=\begin{pmatrix} p_0 & 0 \\ 0 &p_1 \end{pmatrix}

where p_0 and p_1 are real, p_0\geqslant 0, p_1\geqslant 0, and p_0 + p_1 = 1
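The three validity conditions are easy to check numerically. A minimal NumPy sketch (the helper name `is_density_matrix` is mine, used only for illustration), applied to the two matrices above:

```python
import numpy as np

def is_density_matrix(rho, tol=1e-9):
    """Check the three conditions: Hermitian, positive semidefinite, trace 1."""
    hermitian = np.allclose(rho, rho.conj().T, atol=tol)
    psd = np.all(np.linalg.eigvalsh(rho) >= -tol)   # all eigenvalues >= 0
    unit_trace = abs(np.trace(rho) - 1) < tol
    return bool(hermitian and psd and unit_trace)

# Pure state |psi> = alpha|0> + beta|1| with |alpha|^2 + |beta|^2 = 1
alpha, beta = 0.6, 0.8j
psi = np.array([alpha, beta])
rho_pure = np.outer(psi, psi.conj())

# Mixed state p0|0><0| + p1|1><1|
rho_mixed = np.diag([0.3, 0.7])

print(is_density_matrix(rho_pure))   # True
print(is_density_matrix(rho_mixed))  # True
print(is_density_matrix(np.eye(2)))  # False: trace is 2, not 1
```

A pure state can also be recognized by its purity: \mathrm{tr}(\rho^2) equals 1 for `rho_pure` and 0.58 for `rho_mixed`.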

Non-pure states are also called noisy states. In classical data processing, noise is always bad, and we should always get rid of it to obtain clean data. As we will see soon, quantum noise is more interesting.

What does it mean that a qubit has the mixed state

\rho =\begin{pmatrix} p_0 & 0 \\ 0 &p_1 \end{pmatrix}

Does it mean that the qubit really has a pure state |0\rangle or |1\rangle, and it just happened that we don’t know which, so we model our incomplete knowledge by the probabilities p_0 and p_1?

Well, this is subtle. It is possible that the qubit has a pure state that we don’t know exactly, but it is also possible that the qubit has no pure state at all.

What is important to understand is that the above is not mere philosophy. The difference between the two cases has mathematical consequences in quantum mechanics, and at the end of the day the difference can be (statistically) measured.

Let us consider the two-qubit EPR state

|\psi_{1}\rangle =\frac{1}{\sqrt{2}}(|00\rangle + |11\rangle)

The density matrix of the state is

\rho_{1} =\frac{1}{2}(|00\rangle + |11\rangle)(\langle 00| + \langle 11|)=\frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 1 \\  0 & 0 & 0 & 0 \\  0 & 0 & 0 & 0 \\  1 & 0 & 0 & 1 \end{pmatrix}

Each qubit in the pair has probability 1/2 of being in state |0\rangle or state |1\rangle.

We can construct a mixed state with the same property:

\rho_{2} =\frac{1}{2}(|00\rangle\langle 00| + |11\rangle\langle 11|)=\frac{1}{2}\begin{pmatrix} 1 & 0 & 0 & 0 \\  0 & 0 & 0 & 0 \\  0 & 0 & 0 & 0 \\  0 & 0 & 0 & 1 \end{pmatrix}

In both cases the individual qubits have identical noisy states (only the two-qubit states differ). It looks like the EPR state and the second state are statistically identical, but John Bell, using a clever argument, showed that they are not: the EPR state violates the so-called Bell inequalities, while the second state does not.
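The claim that the individual qubits look identical while the two-qubit states differ can be checked directly with a partial trace. A NumPy sketch (my own illustration, using the two density matrices \rho_1 and \rho_2 written above):

```python
import numpy as np

# EPR state (|00> + |11>)/sqrt(2) and the classical mixture of |00>, |11>
psi = np.zeros(4)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho1 = np.outer(psi, psi)                                # pure entangled state
rho2 = 0.5 * np.diag([1.0, 0, 0, 0]) + 0.5 * np.diag([0, 0, 0, 1.0])  # mixture

def reduced_first_qubit(rho):
    """Partial trace over the second qubit of a 4x4 two-qubit density matrix."""
    r = rho.reshape(2, 2, 2, 2)            # indices: (a, b, a', b')
    return np.trace(r, axis1=1, axis2=3)   # sum over b = b'

print(reduced_first_qubit(rho1))  # [[0.5, 0], [0, 0.5]] -- the noisy state I/2
print(reduced_first_qubit(rho2))  # the same reduced state
print(np.allclose(rho1, rho2))    # False -- the two-qubit states differ
```

Both reduced single-qubit states come out as the maximally mixed state I/2, yet \rho_1 and \rho_2 are different 4x4 matrices; the difference lives entirely in the two-qubit correlations that Bell’s inequalities probe.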

It is funny that Bell’s discovery came about 30 years after the related questions were raised in the famous EPR paper by Einstein himself, and all prominent physicists of the time were aware of the EPR paper; the discovery waited 30 years for John Bell.

It is common knowledge today that the density matrix formalism mathematically captures the physical difference between states: two states with the same density matrix are physically indistinguishable, and two states with different density matrices are physically distinguishable; it seems nobody understood this before Bell’s discovery.

Another term used in discussing quantum noise is coherence (the term coherence may have different meanings in physics, be aware). If an initially pure qubit state evolves into a noisy state, we say that the qubit has lost coherence. But there are different ways to lose coherence. The coherence of an individual qubit in a multiqubit system may leak into other qubits of the system, so that the multiqubit system as a whole preserves coherence. This is a controllable and reversible loss of coherence. If the multiqubit system is a quantum computer, this process is an important part of quantum computation. In quantum algorithms the individual qubits lose coherence at intermediate steps and restore coherence (with high probability, at least) at the end, before the final measurement.

The main problem with building quantum computers is that coherence uncontrollably leaks into the environment, and the whole multiqubit system loses coherence; since we can’t control the environment at the quantum level, this loss of coherence is irreversible. This process introduces a really bad kind of quantum noise which destroys quantum computation.
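Irreversible coherence loss has a standard textbook model: the phase-flip (dephasing) channel, where the environment applies Z with some probability. This is my own NumPy sketch, not anything specific from the text; it shows the off-diagonal (coherence) terms of a density matrix shrinking while the diagonal stays intact.

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def dephase(rho, p):
    """Phase-flip channel: with probability p the environment applies Z.
    Off-diagonal terms are multiplied by (1 - 2p); the diagonal is unchanged."""
    return (1 - p) * rho + p * (Z @ rho @ Z)

plus = np.array([1.0, 1.0]) / np.sqrt(2)   # pure |+> state
rho = np.outer(plus, plus)                 # [[0.5, 0.5], [0.5, 0.5]]

print(dephase(rho, 0.25))  # off-diagonals shrink from 0.5 to 0.25
print(dephase(rho, 0.5))   # fully dephased: the maximally mixed state I/2
```

At p = 1/2 the pure |+\rangle state degrades into the noisy state I/2; no operation on the qubit alone can undo this, which is exactly the irreversibility described above.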

On the Delphi *.dcp files


A *.dcp file is created when Delphi builds a package. It always bothered me that the default *.dcp file location does not take the build configuration into account. For example, the default location for the Win32 platform is $(BDSCOMMONDIR)\DCP; if you build a package in the DEBUG configuration and then in the RELEASE configuration, the release *.dcp overwrites the debug *.dcp.

But the point is: *.dcp files are needed only to develop dependent packages. If you develop a single package, say PackageA, you can forget about the PackageA.dcp file and where it is located; it is simply not needed.

If you develop two packages, say PackageA and PackageB, and PackageB depends on PackageA (requires PackageA), then you can’t even fully create PackageB without specifying the location of the PackageA.dcp file. In this case the solution is: rely on the default *.dcp file location and don’t change it. The build workflow should be as follows:

  • Build PackageA in DEBUG configuration
  • Build PackageB in DEBUG configuration
  • Build PackageA in RELEASE configuration
  • Build PackageB in RELEASE configuration

The release versions of the *.dcp files have overwritten the debug versions, but the *.dcp files have served their purpose: the compiled debug version of PackageB depends on the compiled debug version of PackageA, and the compiled release version of PackageB depends on the compiled release version of PackageA; the *.dcp files are not needed anymore.

Introduction to Python for Delphi programmers


Why does a Delphi programmer need Python at all? The main reason: the Python ecosystem is much bigger and much more active than the Delphi ecosystem; there are many useful, actively developed projects in Python, many more than you can find in Delphi. Currently I am dabbling with the nice MkDocs documentation generator and planning to use it for documenting my project.

The first thing you need to understand about Python is project isolation. In Delphi you can add units, packages, etc. to a project without affecting other projects; Delphi projects are well isolated. If you simply install Python and start developing projects, you quickly discover that there is no project isolation at all: if you add a Python package for one project, the package becomes available to all projects. It is quite stunning when you encounter it for the first time, but there is a solution: a separate Python installation for each project. These separate installations are called environments. The idea is: you have one global Python installation whose sole purpose is creating environments; you never use the global installation for developing projects. When you start a new project you create a new environment, and when you later add packages to the environment, it does not affect the global installation or other environments.

There are several ways to create environments; the way I use is the Conda package manager.

I am using Python on Linux Mint, and Linux Mint already has Python installed (on Windows you probably need to install Python first). But this Python belongs to the system; the system installed it for its own purposes, and trying to modify the system Python is a bad idea. The good news is: Python is a Conda installation requirement, and you already have it.

Go to the Miniconda download page and download Miniconda for your system. There are two installer versions, based on Python 2 and Python 3; use the one based on Python 3 (the choice does not matter much, but the Python 3 version creates Python 3 environments by default). Don’t install Anaconda; if you want to play with Anaconda, install it later into an environment.

Open a Terminal window, go to the download folder, and run the downloaded installer; accept the default settings and answer “yes” to the installer questions. After the installation is completed, close the Terminal window and open it again. Now you have Python installed in your home folder; to check, run the which python command:

  serg@TravelMate ~ $ which python

This is global Python installation that will be used to create environments.

To play with MkDocs I created an environment named mkdocs:

  conda create -n mkdocs pip

pip is the Python package manager that will be included in the new environment. The Conda documentation may say that you don’t need pip because you can install everything using Conda itself; I believe this is too good to be true and prefer to have pip in every environment, and it is best to install pip right in the environment create command.

Now we need to activate the newly created environment; after activation, check that a different python is installed in the environment:

  serg@TravelMate ~ $ source activate mkdocs
  (mkdocs) serg@TravelMate ~ $ which python

The next step is to upgrade pip in the environment:

  pip install --upgrade pip

and finally install MkDocs package into the environment:

  pip install mkdocs

Check that mkdocs is installed:

  (mkdocs) serg@TravelMate ~ $ mkdocs --version
  mkdocs, version 1.0.2 from /home/serg/miniconda3/envs/mkdocs/lib/python3.7/site-packages/mkdocs (Python 3.7)

If you are not going to do more for now, deactivate the environment:

  (mkdocs) serg@TravelMate ~ $ source deactivate
  serg@TravelMate ~ $ 

or just close the Terminal window.

Bitbucket in Russia


Bitbucket in Russia has fallen an innocent victim of the random wars the Russian government is waging on the Internet. The connection to Bitbucket had been worsening for months and is nearly broken now. I’ve found a currently working solution here (in Russian). It boils down to editing the /etc/hosts file, for example

sudo -i nano /etc/hosts

adding a line for bitbucket.org (with the IP address given in the linked article),

and rebooting.

It worked for me, but I am now thinking of making a backup repository on GitHub.

FastMM4 FullDebugMode Setup Guide

  1. Download the latest FastMM4 release; currently it is version 4.992
  2. Copy the precompiled FullDebugMode DLLs from the downloaded archive to folders where Windows can find them. I recommend doing the following:
    • Manually create ‘c:\Software’ folder (or name the folder as you like);
    • Create ‘c:\Software\DLL’ subfolder (for 32-bit dll’s) and ‘c:\Software\DLL64’ subfolder (for 64-bit dll’s);
    • Add the paths ‘c:\Software\DLL’ and ‘c:\Software\DLL64’ to the system PATH variable (open Start menu, begin typing ‘environment …’ and choose ‘Edit the system environment variables’)
    • Copy ‘FastMM_FullDebugMode.dll’ to ‘c:\Software\DLL’
    • Copy ‘FastMM_FullDebugMode64.dll’ to ‘c:\Software\DLL64’
  3. Create FastMM4 folder; let it be ‘c:\Software\FastMM4’
  4. Copy ‘*.pas’ files and ‘FastMM4Options.inc’ from the main folder of the downloaded archive to ‘c:\Software\FastMM4’; do not copy the subfolders of the archive, they are just not needed here
  5. Enable FullDebugMode: open ‘FastMM4Options.inc’, find the {.$define FullDebugMode} entry and remove the dot ({.$define FullDebugMode} –> {$define FullDebugMode})

Now the system-wide setup is completed, and we can test apps.

Delphi 10.2 Tokyo:

  • Create new console project and open ‘Project Options’ dialog;
  • Select ‘All configurations’ target;
  • Add ‘c:\Software\FastMM4’ to the search path and click ‘OK’.
  • Add ‘FastMM4’ as the first item in the ‘uses’ clause and add a simple memory leak:
program Project1;

{$APPTYPE CONSOLE}

{$R *.res}

uses
  FastMM4,
  System.SysUtils;

procedure MemLeak;
var P: PByte;
begin
  GetMem(P, 10);  // allocated but never freed - a deliberate leak
end;

begin
  try
    MemLeak;
  except
    on E: Exception do
      Writeln(E.ClassName, ': ', E.Message);
  end;
//  Readln;
end.
If you run the app, you get a FastMM4 error message.

Lazarus 1.8.4:

  • Unfortunately, FastMM4 does not support FPC on Windows, and even the simplest code example does not compile.

Linux Mint 18 and UEFI boot manager


Recently I was installing Linux Mint on a new Acer laptop with a UEFI boot manager. The laptop came with preinstalled “Endless OS”, which turned out to be useless because it lacks a package manager. I created a Linux Mint 18.3 bootable USB using Rufus and chose “GPT partition scheme for UEFI”. I did not make any BIOS changes before installation, and the installation procedure worked fine; I chose the “Erase the entire disk” option during installation. After the installation, when I tried to launch the newly installed OS, I got a “No Bootable Device” screen. After several trial-and-error iterations I came up with the following solution:

  • During installation, do not check the “Install 3rd party drivers …” option; the drivers will not be installed anyway, and they can be installed later using the Driver Manager.
  • After the installation is over, boot into the BIOS settings (on Acer laptops, by pressing the F2 key after switching power on) and set the EFI file created during installation as trusted. The procedure is described in much detail here; in my case the file turned out to be grubx64.efi.
  • The system should boot now, but without some drivers. The worst problem in my case appeared after installing Oracle’s VirtualBox: VirtualBox installs its own kernel driver, and VirtualBox did not work because the driver did not work. So you need to enable driver installation now, which is done by disabling the “Secure Boot” option in the BIOS.

Crosscompiling with Lazarus 1.8 on Linux Mint 18.3


Suppose you have installed Lazarus 1.8 on Linux Mint 18.3 as described before and want to build Windows binaries (well, we don’t like Windows, but the users do 🙂 ). I found a useful piece of information about crosscompiling here and started to adapt it to my 32-bit Linux system.

The first step is to build and install the cross-compiler. For demonstration purposes I decided to build the Win32 cross-compiler (the Win64 case should not be much different).

Lazarus 1.8 uses FPC version 3.0.4, so to perform the first step, open up a Terminal and execute the following commands:

# Navigate to the fpc source folder.
cd /usr/share/fpcsrc/3.0.4

# Compile the cross-compiler.
sudo make clean all OS_TARGET=win32 CPU_TARGET=i386

# Install the cross-compiler.
sudo make crossinstall OS_TARGET=win32 CPU_TARGET=i386 INSTALL_PREFIX=/usr

# Link the cross-compiler and place the link where Lazarus can see it.
sudo ln -sf /usr/lib/fpc/3.0.4/ppcross386 /usr/bin/ppcross386

Now let us open Lazarus and create a simple GUI project. I dropped a button on the form and wrote an OnClick handler:

procedure TForm1.Button1Click(Sender: TObject);
begin
  ShowMessage('Hello World!');
end;

I created the subfolder /Projects/Lazarus/HelloWorldGUI in my home folder and saved the project under the name HelloW. You can build and run the project and see that it works.

Now it is time to configure the project for Win32 crosscompiling. Open the Project Options dialog (Ctrl-Shift-F11). You should see this:


Check Build Modes checkbox:


and click the ellipsis button; a new dialog appears:


Click plus button to create a new build mode, and name it Win32:


Now we should tell Lazarus to compile Win32 code for this build mode. Select Config and Target on the left panel and select Win32 as target OS:


Now you can build the project. I simply clicked the green Run button and got a warning window:


Well, I guess one can’t expect to debug a Win32 binary on Linux. Still, the work was done, and I found the HelloW.exe file in the project folder. To be sure, I copied the file to a 64-bit Windows 10 system, and It Works!


Installing Lazarus 1.8 on Linux Mint 18.3


Yesterday I came across my old 32-bit Celeron laptop with 2 GB of memory and a broken battery and decided to install Linux on it. First I tried CentOS 7.5, then Ubuntu 16.04, and finally settled on Mint 18.3 Cinnamon, which I liked most. After playing a little with the OS I decided to install the brand new Lazarus 1.8. I spent a couple of hours searching for a clear installation guide on the Internet; I scanned tons of outdated nonsense before I finally found what I was looking for.

My Linux Mint installation was fresh, so I did not need to purge previous fpc/lazarus installations. I started by downloading three 32-bit .deb packages from SourceForge.

After downloading, my Downloads folder looked like this:


The next recommended step is to check the hashes of the files. I launched Terminal and changed to the Downloads directory:


Now it’s time to install the packages. As recommended in the linked guide, I typed in Terminal:

sudo apt install ./fpc_3.0.4-1_i386.deb
sudo apt install ./fpc-src_3.0.4-1_i386.deb
sudo apt install ./lazarus-project_1.8.0.0_i386.deb
sudo apt-mark hold fpc fpc-src lazarus lazarus-project

And that is all! After opening the Cinnamon menu I found this: