fix: grammarly

Signed-off-by: Matej Focko <mfocko@redhat.com>
Matej Focko 2022-05-18 21:03:03 +02:00
parent 8709d8031d
commit 0bbcdf0ea3
Signed by: mfocko
GPG key ID: 7C47D46246790496
4 changed files with 135 additions and 193 deletions


In the text and pseudocode we adopt these functions or properties~\cite{wavl}:
\item function $parent(x)$ or property $x.parent$ returns the parent of a node; analogously for the left and right children of a node
\item \textit{rank-difference} of \textit{x} is defined as $r(parent(x)) - r(x)$
\item $x$ is an \textit{i-child} if its rank-difference is $i$
\item $x$ is an $(i,~j)$-node if its left and right children have $i$ and $j$ rank-differences respectively; ordering of the children does not matter
\end{itemize}
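To make the vocabulary above concrete, the definitions can be sketched in Python. This is our illustration, not code from the text; the `Node` class and helper names are assumptions, and `rank(nil) = -1` is taken from the convention used later for AVL trees.

```python
# Illustrative helpers for the rank vocabulary; all names are ours.

class Node:
    def __init__(self, key, rank=0, parent=None):
        self.key, self.rank, self.parent = key, rank, parent
        self.left = self.right = None

def rank(x):
    return x.rank if x is not None else -1   # rank(nil) is defined as -1

def rank_difference(x):
    # rank-difference of x is r(parent(x)) - r(x)
    return rank(x.parent) - rank(x)

def is_i_child(x, i):
    return rank_difference(x) == i

def is_ij_node(x, i, j):
    # ordering of the children does not matter
    diffs = sorted(x.rank - rank(c) for c in (x.left, x.right))
    return diffs == sorted((i, j))
```

For instance, a rank-1 node with one rank-0 child and a `nil` child is a (1, 2)-node, and equivalently a (2, 1)-node.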
\section{Rules for other trees}
As we have mentioned at the beginning of \hyperref[chap:rank-balanced-trees]{this chapter}, other self-balancing trees can also be described by rank rules.
\subsection{AVL tree}\label{chap:avl-rule}
\textbf{AVL Rule}: Every node is (1,~1) or (1,~2).~\cite{wavl}
In the case of the AVL tree, rank represents height. Here we can notice an ingenious way of using the \textit{(i,~j)-node} definition. If we go back to the definition and want to be explicit about the nodes that are allowed with the \textit{AVL Rule}, then we get (1,~1), (1,~2) \textbf{or} (2,~1)-nodes. However, it is possible to find implementations of the AVL tree that allow leaning \textbf{to only one side} as opposed to the original requirements given by \textit{Adelson-Velsky and Landis}~\cite{avl}. Forbidding interchangeability of (i,~j) with (j,~i)-nodes would still yield AVL trees that lean to one side.
The meaning of the \textit{AVL Rule} is quite simple since rank represents the height in that case. We can draw analogies using the notation used for the AVL trees, where we mark nodes with a trit (or a sign) or use a balance factor. We have two cases to discuss:
\begin{itemize}
\item \textbf{(1,~1)-node} represents a tree where both of its subtrees have the same height. In this case, we are talking about the nodes with balance factor $0$ (respectively being signed with a $0$).
\item \textbf{(1,~2)-node} represents a tree where one of its subtrees has a bigger height. In this case, we are talking about the nodes with balance factor $-1$ or $1$ (respectively being signed with a $-$ or a $+$).
\end{itemize}
An example of an AVL tree that uses ranks instead of signs or balance factors can be seen in \autoref{fig:ranked:avl}.
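The correspondence between the two cases above and the classic balance factor can be sketched in Python (our illustration, not the thesis code): since rank equals height in an AVL tree, the balance factor is simply a difference of the children's ranks.

```python
# Sketch: reading the AVL balance factor off the ranks; names are ours.

class Node:
    def __init__(self, key, rank=0):
        self.key, self.rank = key, rank
        self.left = self.right = None

def rank(x):
    return x.rank if x is not None else -1   # rank(nil) = -1

def balance_factor(node):
    # rank == height in an AVL tree, so this is height(right) - height(left)
    return rank(node.right) - rank(node.left)
```

A (1, 1)-node yields balance factor $0$; a (1, 2)-node yields $-1$ or $1$ depending on which subtree is higher.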


As mentioned previously, red-black trees are among the most popular implementations of self-balancing binary search trees.
\end{enumerate}
Given this knowledge, we can safely deduce the following relation between the height of the red-black tree and nodes stored in it~\cite{cormen2009introduction}:
\begin{equation}
\log_2{(n + 1)} \leq h \leq 2 \cdot \log_2{(n + 2)} - 2
\label{rb-height}
\end{equation}
The lower bound is given by a perfect binary tree, and the upper bound is given by the minimal red-black tree.
\[
BalanceFactor(n) \in \{ -1, 0, 1 \}
\]
In other words, the heights of the left and right subtrees of each node differ by at most 1.~\cite{avl}
Similarly, we will deduce the height of the AVL tree from the original paper, by \textit{Adelson-Velsky and Landis}~\cite{avl}:
\begin{equation}
\left( \log_2{(n + 1)} \leq \right) h < \log_{\varphi}{(n + 1)} < \frac{3}{2} \cdot \log_2{(n + 1)}
\label{avl-height}
\end{equation}
If we compare the upper bounds for the height of the red-black and AVL trees, we can see that the AVL rules are stricter than the red-black rules, but at the cost of rebalancing. However, in both cases, the rebalancing still takes time proportional to $\log_2{n}$.
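As a quick numeric illustration (ours, not from the text), we can evaluate the two upper bounds quoted above for a few values of $n$; for large $n$, the AVL bound is indeed the lower of the two.

```python
# Comparing the upper bounds on height:
#   red-black: 2 * log2(n + 2) - 2
#   AVL:       (3/2) * log2(n + 1)
import math

def rb_upper(n):
    return 2 * math.log2(n + 2) - 2

def avl_upper(n):
    return 1.5 * math.log2(n + 1)

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, round(rb_upper(n), 2), round(avl_upper(n), 2))
```

The gap between the two bounds grows with $n$, matching the observation that the AVL rules enforce a more tightly balanced tree.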


color, %% Uncomment these lines (by removing the %% at the
%% beginning) to use color in the digital version of your
%% document
notable, %% The `table` option causes the coloring of tables.
%% Replace with `notable` to restore plain LaTeX tables.
oneside, %% The `twoside` option enables double-sided typesetting.
%% Use at least 120 g/m² paper to prevent show-through.
\usepackage[x11names, svgnames, rgb]{xcolor}
\usepackage{tikz}
\usetikzlibrary{decorations,arrows,shapes}
\usetikzlibrary{trees}
\tikzset{
treenode/.style = {shape=circle, draw, align=center,},
left side node/.style={above left, inner sep=0.1em},
right side node/.style={above right, inner sep=0.1em}
}
\usepackage{graphicx}
\SetKwProg{Fn}{function}{ is}{end}
\SetKwProg{Proc}{procedure}{ is}{end}

View file

Based on the rank rules for implementing the red-black tree, we can now state the rank rule for the weak AVL tree.
\textbf{Weak AVL Rule}: All rank differences are 1 or 2, and every leaf has rank 0.~\cite{wavl}
Comparing the \textit{Weak AVL Rule} to the \textit{AVL Rule}, we can come to these conclusions:
\begin{itemize}
\item \textit{Every leaf has rank 0} holds with the AVL Rule since every node is (1,~1) or (1,~2), and the rank of a node represents the height of its tree. The rank of \textit{nil} is defined as $-1$, and the height of the tree rooted at the leaf is $0$; therefore, leaves are (1,~1)-nodes.
\item \textit{All rank differences are 1 or 2} does not hold in one specific case, and that is (2,~2)-node, which is allowed in the WAVL tree, but not in the AVL tree. This difference will be explained more thoroughly later on.
\end{itemize}
\section{Height boundaries}
We have described common self-balanced binary search trees in \autoref{chap:sb-bst} to be able to draw analogies and explain the differences between them. Given the height boundaries of the red-black and AVL trees, we can safely say that the AVL tree is stricter regarding self-balancing than the red-black tree. Let us show how the WAVL tree fits among them. \textit{Haeupler et al.}~\cite{wavl} present the following bounds:
\begin{equation}
h \leq k \leq 2h \text{ and } k \leq 2 \log_2{n}
\end{equation}
In those equations, we can see $h$ and $n$ in the same context as we used to lay boundaries for the AVL and red-black trees, but we can also see a new variable, $k$, which represents the rank of the tree.
One of the core differences between AVL and WAVL lies in the rebalancing after deletion. Insertion into the WAVL tree is realized in the same way as it would in the AVL tree, and the benefit of the (2,~2)-node is used during deletion rebalancing.
From the previous two statements, we can draw two conclusions:
\begin{itemize}
\item If we commit only insertions to the WAVL tree, it will always yield a valid AVL tree. In that case, the height boundaries are the same as those of the AVL tree (described in \autoref{avl-height}).
\item If we also commit deletions, we can assume the worst-case scenario where \[ h < 2 \log_2{n} \]
This scenario is close to the upper bound of the height for the red-black trees (described in \autoref{rb-height}).
\end{itemize}
From the two conclusions, we can safely deduce that the WAVL tree is, in the worst-case scenario, as efficient as the red-black tree and, in the best-case scenario, as efficient as the AVL tree.
\section{Insertion into the weak AVL tree}
Inserting values into the WAVL tree is equivalent to inserting values into a regular binary search tree, followed by rebalancing that ensures rank rules hold. This part can be seen in \autoref{algorithm:wavl:insert}. We can also see there are two early returns. One of them happens during insertion into the empty tree and the other during insertion of a duplicate key, which we do not allow.
\begin{algorithm}
\Proc{$\texttt{insert}(T, key)$}{
\BlankLine
$\wavlInsertRebalance(T, insertedNode)$\;
}
\caption{Insert operation on binary search tree.}\label{algorithm:wavl:insert}
\end{algorithm}
In \autoref{algorithm:wavl:insert} we have also utilized a helper function to find the parent of the newly inserted node and prevent the insertion of duplicate keys within the tree. The pseudocode of that function can be seen in \autoref{algorithm:findParentNode}.
\begin{algorithm}
\Fn{$\texttt{findParentNode}(key, node)$}{
\BlankLine
\Return{node}\;
}
\caption{Helper function that returns parent for newly inserted node.}\label{algorithm:findParentNode}
\end{algorithm}
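One possible reading of the helper in Python (our sketch, under our own naming; the pseudocode's exact handling of duplicates may differ, so here an existing key is reported back and the caller is expected to refuse the insertion):

```python
# Sketch of findParentNode: walk down from `node` comparing keys.

class Node:
    def __init__(self, key):
        self.key = key
        self.left = self.right = None

def find_parent_node(key, node):
    """Return the node that should become the parent of `key`, or the
    node already holding `key` so the caller can reject the duplicate."""
    while True:
        if key == node.key:
            return node
        nxt = node.left if key < node.key else node.right
        if nxt is None:
            return node
        node = nxt
```

The caller detects a duplicate by checking whether the returned node's key equals the key being inserted.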
Rebalancing after insertion in the WAVL tree is equivalent to rebalancing after insertion in the AVL tree. We will start with a short description of the rebalancing within AVL to lay a foundation for analogies and differences compared to the implementation using ranks.
When propagating the error, we can encounter 3 cases (we explain them with respect to propagating the insertion from the left subtree; propagation from the right is mirrored, and the roles of trits $+$ and $-$ swap)~\cite{labyrint}:
\begin{enumerate}
\item \textit{Node was marked with $+$.} In this case, the heights of the left and right subtrees are now equal; the node is marked with $0$, and the propagation can be stopped.\label{avl:rules:insert:1}
\item \textit{Node was marked with $0$.} In this case, the node is marked with $-$, but the height of the tree rooted at the node has changed, so we need to propagate the changes further.\label{avl:rules:insert:2}
\item \textit{Node was marked with $-$.} In this case, the node would acquire a balance factor of $-2$, which is not allowed. In this situation, we decide based on the mark of the node from which we are propagating the insertion in the following way (let $x$ be the node from which the information is being propagated and $z$ the current node marked with $-$):\label{avl:rules:insert:3}
\begin{enumerate}
\item $x$ is marked with $-$; then we rotate by $z$ to the right. After that, both $z$ and $x$ can be marked with $0$. The height from the parent's point of view has not changed; therefore, we can stop the propagation.\label{avl:rules:insert:3a}
\item $x$ is marked with $+$; then we double rotate: first by $x$ to the left and then by $z$ to the right. Here we need to recalculate the balance factors for $z$ and $x$, where $z$ gets $-$ or $0$ and $x$ gets $0$ or $+$. The node that was the right child of $x$ before the double rotation is now marked with $0$, and the propagation can be stopped.\label{avl:rules:insert:3b}
\item $x$ is marked with $0$. This case cannot happen, because we never propagate the height change from a node that acquired the sign $0$.
\end{enumerate}
\end{enumerate}
In the following explanation, we have to consider that valid nodes in an AVL tree implemented via ranks are (1,~1)- and (1,~2)-nodes, and that by the time we evaluate the rank-differences of the parent, they are already affected by the rebalancing done from the inserted leaf.
Rebalancing the tree is equivalent to the rebalancing of the AVL tree and is executed as shown in \autoref{algorithm:wavl:insertRebalance}.
\begin{algorithm}
\Proc{$\texttt{insertRebalance}(T, node)$}{
\tcp{Handles \hyperref[avl:rules:insert:2]{rule 2}}
}
\BlankLine
}
\caption{Algorithm containing bottom-up rebalancing after insertion.}\label{algorithm:wavl:insertRebalance}
\end{algorithm}
As a first step, which can be seen in \autoref{algorithm:wavl:insertRebalance}, we iteratively check the rank-differences of the parent of the current node. As long as it is a (0,~1)-node, we promote it and propagate the error further. There is an interesting observation to be made about \textit{how the parent can fulfil such a requirement}. The answer is simple: since we are adding a leaf or are already propagating the change towards the root, we have lowered the rank-difference of the parent; therefore, it must have been a (1,~1)-node. In the algorithm used for usual implementations of AVL trees, this step corresponds to \hyperref[avl:rules:insert:2]{\textit{rule 2}}. After the promotion, the parent becomes a (1,~2)-node, which means it gets the sign $-$ (or $+$ respectively when propagating from the right subtree), conforming to the standard algorithm.
After this, we might end up in two situations, and those are:
\begin{enumerate}
\item The current node is not a 0-child, which means that after the propagation and promotions we have reached a parent that is a (1,~2)-node; this corresponds to \hyperref[avl:rules:insert:1]{\textit{rule 1}}.
\item The current node is a 0-child, which means that after the propagation and promotions, we have a node whose parent is a (0,~2)-node. This case conforms to \hyperref[avl:rules:insert:3]{\textit{rule 3}} and must be handled further to fix the broken rank rule.
\end{enumerate}
\hyperref[avl:rules:insert:3]{\textit{Rule 3}} is then handled by a helper function that can be seen in \autoref{algorithm:wavl:fix0Child}.
\begin{algorithm}
\Proc{$\texttt{fix0Child}(T, x, y, rotateToLeft, rotateToRight)$}{
$z \gets x.parent$\;
$\texttt{demote}(z)$\;
}
}
\caption{Generic algorithm for fixing 0-child after insertion.}\label{algorithm:wavl:fix0Child}
\end{algorithm}
Here we can, once again, see an interesting pattern. Compared to the algorithm described above, with the rank representation we do not need to worry about changing the signs and updating the heights: rotations combined with demotions and promotions of ranks effectively update the heights (represented via ranks) of the affected nodes. This observation could also be used in \autoref{algorithm:avl:deleteFixNode} and \autoref{algorithm:avl:deleteRotate}, where we turned to manual updating of ranks to show the difference.
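The insertion procedure of this section — bottom-up promotions while the parent is a (0, 1)-node, followed by a single or double rotation on a (0, 2)-node — can be sketched in Python. This is our illustrative sketch under stated assumptions, not the thesis implementation; all names (`Node`, `WAVLTree`, `_rotate`, …) are ours, and only insertion is covered.

```python
# Minimal WAVL insertion sketch (ours); rank(nil) = -1.

class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None
        self.rank = 0                      # a new node is a leaf with rank 0

def rank(x):
    return x.rank if x is not None else -1

class WAVLTree:
    def __init__(self):
        self.root = None

    def _rotate(self, z, left):
        # single rotation around z; `left` selects the direction
        x = z.right if left else z.left
        if left:
            z.right = x.left
            if z.right is not None:
                z.right.parent = z
            x.left = z
        else:
            z.left = x.right
            if z.left is not None:
                z.left.parent = z
            x.right = z
        x.parent, z.parent = z.parent, x
        if x.parent is None:
            self.root = x
        elif x.parent.left is z:
            x.parent.left = x
        else:
            x.parent.right = x

    def _sibling(self, x):
        p = x.parent
        return p.right if p.left is x else p.left

    def insert(self, key):
        if self.root is None:
            self.root = Node(key)
            return
        p = self.root
        while True:
            if key == p.key:
                return                     # duplicate keys are not allowed
            nxt = p.left if key < p.key else p.right
            if nxt is None:
                break
            p = nxt
        x = Node(key, parent=p)
        if key < p.key:
            p.left = x
        else:
            p.right = x
        self._rebalance_insert(x)

    def _rebalance_insert(self, x):
        p = x.parent
        # promote while the parent is a (0, 1)-node
        while (p is not None and p.rank == x.rank
               and p.rank - rank(self._sibling(x)) == 1):
            p.rank += 1                    # promote(p)
            x, p = p, p.parent
        if p is None or p.rank != x.rank:
            return                         # x is no longer a 0-child
        # p is a (0, 2)-node; fix it with a single or a double rotation
        left_case = p.left is x
        y = x.right if left_case else x.left   # inner child of x
        if y is not None and x.rank - y.rank == 1:
            # double rotation: first by x, then by p in the other direction
            self._rotate(x, left=left_case)
            self._rotate(p, left=not left_case)
            y.rank += 1                    # promote(y)
            x.rank -= 1                    # demote(x)
            p.rank -= 1                    # demote(p)
        else:
            self._rotate(p, left=not left_case)
            p.rank -= 1                    # demote(p)
```

After any sequence of insertions, every rank-difference is 1 or 2 and every leaf has rank 0, as the \textit{Weak AVL Rule} requires.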
\section{Deletion from the weak AVL tree}
$\wavlBottomUpDelete(T, z, parent)$\;
}
}
\caption{Initial phase of algorithm for the rebalance after deletion from the WAVL tree.}\label{algorithm:wavl:deleteRebalance}
\end{algorithm}
As described by \textit{Haeupler et al.}~\cite{wavl}, we start the deletion rebalancing by checking for (2,~2)-node. If that is the case, we demote it and continue with the deletion rebalancing via \autoref{algorithm:wavl:bottomUpDelete} if we have created a 3-child by the demotion. Demoting the (2,~2)-node is imperative since it enforces part of the \textit{Weak AVL Rule}, requiring that leaves have a rank equal to zero.
\begin{figure}
\centering
\draw (33.597bp,61.5bp) node {1};
%
\end{tikzpicture}
\caption{WAVL tree containing two elements.}
\label{fig:wavl:twoElements}
\end{figure}
\label{fig:wavl:twoElementsAfterDelete}
\end{figure}
For example, consider the tree in \autoref{fig:wavl:twoElements}. Deleting key 2 from that tree would leave only key 1 in the tree, with a rank equal to 1, making it a (2,~2)-node and a leaf at the same time. After the demotion of the remaining key, we acquire the tree shown in \autoref{fig:wavl:twoElementsAfterDelete}.
In contrast to the \textit{AVL Rule}, the WAVL tree allows us to have (2,~2)-nodes present. Therefore, we can encounter two key differences during deletion rebalancing:
\begin{enumerate}
\item If anywhere during the deletion rebalancing, \textbf{but not} at the start, we encounter a (2,~2)-node, we can safely stop the rebalancing process, since the rest of the tree must be correct and we have fixed the errors on the way from the leaf to the current node.
\item Compared to the AVL tree, during deletion rebalancing, we need to fix \textbf{3-child} nodes.
\end{enumerate}
\begin{algorithm}
\Proc{$\texttt{bottomUpDelete}(T, x, p)$}{
\If{$x \text{ is not 3-child} \lor p = nil$}{
\Return\;
}
\BlankLine
$y \gets sibling(x)$\;
\BlankLine
\While{$p \neq nil \land x \text{ is 3-child} \land (y \text{ is 2-child or (2,~2)-node})$}{
\If{$y \text{ is not 2-child}$}{
$\texttt{demote}(y)$\;
}
$\texttt{demote}(p)$\;
\BlankLine
$x \gets p$\;
$p \gets x.parent$\;
\If{$p = nil$}{
\Return\;
}
\BlankLine
$y \gets sibling(x)$\;
}
\BlankLine
\If{$p \text{ is not (1,~3)-node}$}{
\Return\;
}
\eIf{$p.left = x$}{
$\wavlFixDelete(T, x, p.right, p, false, \texttt{rotateLeft}, \texttt{rotateRight})$\;
}{
$\wavlFixDelete(T, x, p.left, p, true, \texttt{rotateRight}, \texttt{rotateLeft})$\;
}
}
\caption{Propagation of the broken rank rule after deletion from the WAVL tree.}\label{algorithm:wavl:bottomUpDelete}
\end{algorithm}
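The pseudocode above relies on a $sibling(x)$ helper; a minimal Python version (ours, with an assumed `Node` shape) could look like this:

```python
# Sketch of sibling(x): the parent's other child, or None at the root.

class Node:
    def __init__(self, key, parent=None):
        self.key, self.parent = key, parent
        self.left = self.right = None

def sibling(x):
    p = x.parent
    if p is None:
        return None                 # the root has no sibling
    return p.right if p.left is x else p.left
```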
In all cases of deletion propagation in the WAVL tree, we consider the setup of nodes shown in \autoref{fig:wavl:deletionPropagation}, where $x$ denotes the node from which we propagate the deletion and $y$ is its sibling.
\begin{figure}
\centering
\begin{tikzpicture}
\node [treenode] {$p$}
child{node[treenode] {$x$}}
child{node[treenode] {$y$}};
\end{tikzpicture}
\caption{Nodes assigned to their respective names during the propagation.}
\label{fig:wavl:deletionPropagation}
\end{figure}
The propagation happens by demoting the parent and possibly the sibling of $x$. The condition of the \textit{while}-loop demands that $y$ is either a 2-child or a (2,~2)-node. If it is fulfilled, we can encounter the following situations:
\begin{enumerate}
\item \textit{$x$ is a 3-child and $y$ is a 2-child}: in this case, their parent is a (2,~3)-node, and by demoting the parent, we obtain a (1,~2)-node and propagate the error further towards the root. This situation can be seen in \autoref{fig:wavl:deletion:propagationA}.
\item \textit{$x$ is a 3-child and $y$ is a (2,~2)-node}: in this case, their parent is a (1,~3)-node; by demoting both the parent and $y$, the parent becomes a (1,~2)-node, $y$ becomes a (1,~1)-node, and we propagate the error further towards the root. This situation can be seen in \autoref{fig:wavl:deletion:propagationB}.
\item \textit{$x$ is a 3-child, and $y$ is \textbf{both} a 2-child and a (2,~2)-node}: in this case, their parent is a (2,~3)-node, and we demote just the parent; that way, we obtain a (1,~2)-node and propagate the error further towards the root. In this case, $y$ stays a (2,~2)-node.
\end{enumerate}
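The three demotion cases can be illustrated on ranks alone. The following Python sketch (ours, not the thesis code) performs one step of the \textit{while}-loop body, with nodes reduced to their ranks and `rank(nil) = -1`:

```python
# One propagation step after deletion, on bare ranks (illustrative).

def propagation_step(p_rank, x_rank, y_rank, y_child_ranks):
    """Demote as the while-loop body does; return the new (p_rank, y_rank)."""
    x_is_3_child = p_rank - x_rank == 3
    y_is_2_child = p_rank - y_rank == 2
    y_is_22_node = all(y_rank - c == 2 for c in y_child_ranks)
    # the loop condition: x is a 3-child and y is a 2-child or a (2, 2)-node
    assert x_is_3_child and (y_is_2_child or y_is_22_node)
    if not y_is_2_child:          # y is a (2, 2)-node and a 1-child
        y_rank -= 1               # demote(y)
    p_rank -= 1                   # demote(parent)
    return p_rank, y_rank
```

Running the three enumerated cases (for example with `p_rank = 3`, `x_rank = 0`) ends in a (1, 2)-parent each time, which is what allows the error to move one level up.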
\begin{figure}
\centering
\begin{tabular}{ ccc }
\begin{tikzpicture}
\node [treenode] {$p$}
child{ node[treenode] {$x$} edge from parent node[left side node] {$3$} }
child{ node[treenode] {$y$} edge from parent node[right side node] {$2$} };
\end{tikzpicture}
&
becomes
&
\begin{tikzpicture}
\node [treenode] {$p$}
child{ node[treenode] {$x$} edge from parent node[left side node] {$2$} }
child{ node[treenode] {$y$} edge from parent node[right side node] {$1$} };
\end{tikzpicture}
\end{tabular}
\caption{Deletion propagation in the WAVL tree where $x$ is a 3-child and $y$ is a 2-child.}
\label{fig:wavl:deletion:propagationA}
\end{figure}
\begin{figure}
\centering
\begin{tabular}{ ccc }
\begin{tikzpicture}
\node [treenode] {$p$}
child{ node[treenode] {$x$} edge from parent node[left side node] {$3$} }
child{ node[treenode] {$y$} edge from parent node[right side node] {$1$} };
\end{tikzpicture}
&
becomes
&
\begin{tikzpicture}
\node [treenode] {$p$}
child{ node[treenode] {$x$} edge from parent node[left side node] {$2$} }
child{ node[treenode] {$y$} edge from parent node[right side node] {$1$} };
\end{tikzpicture}
\end{tabular}
\caption{Deletion propagation in the WAVL tree where $x$ is a 3-child and $y$ is a (2,~2)-node.}
\label{fig:wavl:deletion:propagationB}
\end{figure}
\begin{algorithm}
\Proc{$\texttt{fixDelete}(T, x, y, z, reversed, rotateL, rotateR)$}{
$\texttt{demote}(z)$\;
}
}
\caption{Final phase of the deletion rebalance after deletion from the WAVL tree.}\label{algorithm:wavl:fixDelete}
\end{algorithm}
The final part happens when it is no longer possible to propagate the error further. In that case, we perform either a single or a double rotation.