---
id: bf
slug: /paths/bf-to-astar/bf
title: BF
description: |
  Solving the shortest path problem with a naïve approach that turns into
  something.
tags:
  - cpp
  - brute force
  - bellman ford
  - dynamic programming
last_update:
  date: 2024-01-01
---

## Basic idea

We will _ease in_ with our own algorithm to find the shortest path. We will
start by thinking about the ways we can achieve that. If we didn't have the `*`
cells, we could've easily run a BFS[^1] and be done with it. Maybe it is a good
place to start, or maybe it isn't; there is only one way to find out.

_How does the BFS work?_ We know the vertex where we start and we know the
vertex we want to find the shortest path to. Given this knowledge we
incrementally visit all of our neighbours, then their neighbours, and so on,
until the destination is found[^2]. Could we leverage this somehow?

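Just to have something concrete in mind, here is a rough sketch of how such
a BFS could look on our grid. It is only an illustration, not something we will
build on: it assumes the `graph` interface used throughout this post
(`g.has()`, `g.cost()`, `graph::unreachable()` and `vertex_t` being a pair of
the `x` and `y` coordinates), needs `<queue>` and `<array>`, and it only counts
the steps, completely ignoring the price of the `*` cells:

```cpp
// a minimal BFS sketch; it counts the steps only, i.e. it pretends that every
// move costs the same and ignores the price of the ‹*› cells
auto bfs(const graph& g, const vertex_t& source, const vertex_t& destination)
    -> int {
    assert(g.has(source));
    assert(g.has(destination));

    const std::array<vertex_t, 4> directions{
        std::make_pair(0, 1), std::make_pair(0, -1),
        std::make_pair(1, 0), std::make_pair(-1, 0)};

    std::vector<std::vector<bool>> visited(
        g.height(), std::vector<bool>(g.width(), false));

    // queue of ‹(vertex, steps taken so far)›
    std::queue<std::pair<vertex_t, int>> queue;
    queue.emplace(source, 0);
    visited[source.second][source.first] = true;

    while (!queue.empty()) {
        auto [u, steps] = queue.front();
        queue.pop();

        if (u == destination) {
            return steps;
        }

        auto [x, y] = u;
        for (const auto& [dx, dy] : directions) {
            auto v = std::make_pair(x + dx, y + dy);

            // skip the walls and the cells we have already visited
            if (g.cost(u, v) == graph::unreachable() ||
                visited[y + dy][x + dx]) {
                continue;
            }

            visited[y + dy][x + dx] = true;
            queue.emplace(v, steps + 1);
        }
    }

    // the destination cannot be reached at all
    return graph::unreachable();
}
```

On a map without the `*` cells this alone would already give us the shortest
path, which is exactly why it is such a tempting starting point.
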
## Naïve approach

Well, we could probably start with all vertices being _unreachable_ (having the
highest possible price) and try to improve what we've gotten so far until there
are no improvements left. That sounds fine, so we shall implement it. Since we
are going on repeat, we will name this function `bf()` as in _brute-force_,
because it is trying to find the path the hard way:

```cpp
const static std::vector<vertex_t> DIRECTIONS =
    std::vector{std::make_pair(0, 1), std::make_pair(0, -1),
                std::make_pair(1, 0), std::make_pair(-1, 0)};

auto bf(const graph& g, const vertex_t& source, const vertex_t& destination)
    -> int {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // ‹destination› must be within the bounds
    assert(g.has(destination));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we need to improve the paths as long as possible
    bool improvement_found;
    do {
        // reset the flag at the beginning
        improvement_found = false;

        // go through all of the vertices
        for (int y = 0; y < g.height(); ++y) {
            for (int x = 0; x < g.width(); ++x) {
                // skip the cells we cannot reach
                if (distances[y][x] == graph::unreachable()) {
                    continue;
                }

                // go through the neighbours
                auto u = std::make_pair(x, y);
                for (const auto& [dx, dy] : DIRECTIONS) {
                    auto v = std::make_pair(x + dx, y + dy);
                    auto cost = g.cost(u, v);

                    // if we can move to the cell and it's better, relax¹ it
                    if (cost != graph::unreachable() &&
                        distances[y][x] + cost < distances[y + dy][x + dx]) {
                        distances[y + dy][x + dx] = distances[y][x] + cost;
                        improvement_found = true;
                    }
                }
            }
        }
    } while (improvement_found);

    return distances[destination.second][destination.first];
}
```

:::info Relaxation

I have made a brief mention of the relaxation in the comment in the code.
You've probably been taught that **relaxation of an edge** means that you have
found a better solution to the problem.

In general it is an approximation technique that _reduces_ the problem of
finding the path `u → x1 → … → xn → v` to the subproblems
`u → x1, x1 → x2, …, xn → v` such that the sum of the costs of each step is
**minimal**.

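In our shortest-path setting, relaxing the edge `u → v` boils down to a single
update of the distance table, which is exactly what the `if` statement in the
code above performs (with $w(u, v)$ being the cost of the move):

$$
d(v) \gets \min\bigl(d(v),\ d(u) + w(u, v)\bigr)
$$
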
:::

### Correctness

_Is our solution correct?_ It appears to be… We have a rather complicated map
and our algorithm has finished in an instant with the following output:

```
Normal cost: 1
Vortex cost: 5
Graph:
#############
#..#..*.*.**#
##***.....**#
#..########.#
#...###...#.#
#..#...##.#.#
#..#.*.#..#.#
#D...#....#.#
########*.*.#
#S..........#
#############
Cost: 22
```

If you take a closer look at the map, you will realize that the cost `22`
belongs to the path that skips the `*` cells, since they cost more than going
around them.

We can play around with it a bit. The `*` cells can even be vortices that pull
you in with a negative price and let you _propel_ yourself out :wink: Let's
change their cost to `-1` then and check what the fastest path to the
destination is.

```
Normal cost: 1
Vortex cost: -1
Graph:
#############
#..#..*.*.**#
##***.....**#
#..########.#
#...###...#.#
#..#...##.#.#
#..#.*.#..#.#
#D...#....#.#
########*.*.#
#S..........#
#############
```

And we're somehow stuck… The issue comes from the fact that _spinning around_
in the vortices allows us to lower the cost indefinitely. That's why after each
iteration there is still a possibility to lower the cost, hence the algorithm
never finishes. _What can we do about this?_

:::tip

This algorithm is correct as long as there are no negative loops, i.e. no ways
to lower the cost indefinitely. Therefore we can also just lay down
a precondition that requires no negative loops to be present.

:::

### Fixing the infinite loop

Our issue lies in the fact that we can endlessly lower the cost. Such a thing
must surely happen in some kind of a loop. We could probably track the
relaxations and once we spot a repeating pattern, we know we can safely
terminate, with at least _some_ result.

This approach would not even work on our 2D map, let alone an arbitrary graph.
The problem is that the _negative loops_ lower the cost in **each** iteration,
and that in turn lowers the costs of the cells that are reachable from the said
loops. That's why this problem is relatively hard to tackle; it's not that easy
to spot the repeating patterns algorithmically.

On the other hand, we can approach this from a different perspective. Let's
assume the worst-case scenario (generalized for any graph):

> Let $K_n$ be a complete graph. Let $P$ be the shortest path from $v_1$ to
> $v_n$ such that $P$ has $n - 1$ edges, i.e. the shortest path between the two
> chosen vertices visits all vertices (not necessarily in order) and has the
> lowest cost.
>
> In such a scenario assume the worst-case ordering of the relaxations (only
> one _helpful_ relaxation per iteration). In this case, in each iteration we
> find the next edge on our path $P$ as the last. This means that we need
> $\vert V \vert - 1$ iterations to find the shortest path $P$.
>
> Because we have laid $P$ as the shortest path from $v_1$ to $v_n$ and it
> visits all vertices, its prefixes are the shortest paths from $v_1$ to any
> other vertex in our graph.
>
> Therefore, we can safely assume that any relaxation after $\vert V \vert - 1$
> iterations is the effect of a negative loop in the graph.

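To make the bound more tangible, here is a tiny self-contained experiment
(a plain edge list, independent of our grid and its `graph` class): a path of
four vertices whose edges are deliberately relaxed in the worst possible order,
so that every pass fixes only one more distance and we need exactly
$\vert V \vert - 1 = 3$ passes before the distances `0 1 2 3` settle:

```cpp
#include <climits>
#include <iostream>
#include <string>
#include <tuple>
#include <vector>

int main() {
    // path graph 0 → 1 → 2 → 3 where every edge costs 1; the edges are
    // deliberately listed from the end of the path towards its beginning,
    // which is the worst possible order for the relaxations
    const std::vector<std::tuple<int, int, int>> edges{
        {2, 3, 1}, {1, 2, 1}, {0, 1, 1}};

    std::vector<int> dist(4, INT_MAX);
    dist[0] = 0;

    // |V| - 1 = 3 passes are enough; a 4th one would not improve anything
    for (int pass = 1; pass <= 3; ++pass) {
        for (const auto& [u, v, w] : edges) {
            // relax the edge ‹u → v› if it improves the distance
            if (dist[u] != INT_MAX && dist[u] + w < dist[v]) {
                dist[v] = dist[u] + w;
            }
        }

        std::cout << "after pass " << pass << ":";
        for (int d : dist) {
            std::cout << " " << (d == INT_MAX ? std::string("∞") : std::to_string(d));
        }
        std::cout << "\n";
    }
}
```

With the edges listed in the opposite order a single pass would have been
enough; the point is that we have to bound the number of iterations without
relying on a lucky ordering.
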
_How can we leverage this?_ We will go through the edges only as many times as
we have cells. Let's adjust the code to fix the looping:

```cpp
auto bf_finite(const graph& g, const vertex_t& source,
               const vertex_t& destination) -> int {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // ‹destination› must be within the bounds
    assert(g.has(destination));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we only iterate as many times as cells that we have
    for (int i = g.height() * g.width(); i > 0; --i) {
        // go through all of the vertices
        for (int y = 0; y < g.height(); ++y) {
            for (int x = 0; x < g.width(); ++x) {
                // skip the cells we cannot reach
                if (distances[y][x] == graph::unreachable()) {
                    continue;
                }

                // go through the neighbours
                auto u = std::make_pair(x, y);
                for (const auto& [dx, dy] : DIRECTIONS) {
                    auto v = std::make_pair(x + dx, y + dy);
                    auto cost = g.cost(u, v);

                    // if we can move to the cell and it's better, relax¹ it
                    if (cost != graph::unreachable() &&
                        distances[y][x] + cost < distances[y + dy][x + dx]) {
                        distances[y + dy][x + dx] = distances[y][x] + cost;
                    }
                }
            }
        }
    }

    return distances[destination.second][destination.first];
}
```

And we get the following result:

```
Normal cost: 1
Vortex cost: -1
Graph:
#############
#..#..*.*.**#
##***.....**#
#..########.#
#...###...#.#
#..#...##.#.#
#..#.*.#..#.#
#D...#....#.#
########*.*.#
#S..........#
#############
Cost: -236
```

The negative cost means that there is a way to _propel_ ourselves via some
vortices. Let's adjust the cost of the _vortices_ back to the original `5` and
check whether our modified algorithm works as it did before. And it surely does
yield the `22` as before.

:::tip Refactoring

You can definitely notice some _deep nesting_ in our code. To counter this
phenomenon I will convert the looping over `x` and `y` into a single variable
that can be decomposed back into `x` and `y`. It is a very common practice when
working with 2D arrays/lists to represent them as 1D. In our case:

```
i : 0 → width * height - 1
x = i % width
y = i / width
```

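For illustration, the nested loops over `y` and `x` can then collapse into
a single flat loop (the inverse mapping being `i = y * width + x`); a sketch of
the rewritten loop could look like this:

```cpp
// one flat loop over all of the cells instead of the nested ‹y›/‹x› loops
for (int i = 0; i < g.height() * g.width(); ++i) {
    int x = i % g.width();
    int y = i / g.width();
    // … work with the cell ‹(x, y)› …
}
```
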
:::

## Bellman-Ford

If you have ever attended any Algorithms course that had path-finding in its
syllabus, you probably feel like you've seen the algorithm above before[^3]…
And yes, the first algorithm I have proposed is a very dumb version of the
_Bellman-Ford_ algorithm; it's dumb because it loops :wink: After our “looping”
prevention we got to the point where it is almost the _Bellman-Ford_, with the
one exception that it doesn't report whether there are any negative cycles, it
just ends.

Let's have a look at a proper implementation of the Bellman-Ford algorithm:

```cpp
auto bellman_ford(const graph& g, const vertex_t& source)
    -> std::vector<std::vector<int>> {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we only iterate as many times as cells that we have
    for (int i = g.height() * g.width(); i > 0; --i) {
        // go through all of the vertices
        for (int v = g.height() * g.width() - 1; v >= 0; --v) {
            int y = v / g.width();
            int x = v % g.width();

            // skip the cells we cannot reach
            if (distances[y][x] == graph::unreachable()) {
                continue;
            }

            // go through the neighbours
            auto u = std::make_pair(x, y);
            for (const auto& [dx, dy] : DIRECTIONS) {
                auto v = std::make_pair(x + dx, y + dy);
                auto cost = g.cost(u, v);

                // if we can move to the cell and it's better, relax¹ it
                if (cost != graph::unreachable() &&
                    distances[y][x] + cost < distances[y + dy][x + dx]) {
                    distances[y + dy][x + dx] = distances[y][x] + cost;
                }
            }
        }
    }

    // now we check for the negative loops
    bool relaxed = false;
    for (int v = g.height() * g.width() - 1; !relaxed && v >= 0; --v) {
        int y = v / g.width();
        int x = v % g.width();

        // skip the cells we cannot reach
        if (distances[y][x] == graph::unreachable()) {
            continue;
        }

        // go through the neighbours
        auto u = std::make_pair(x, y);
        for (const auto& [dx, dy] : DIRECTIONS) {
            auto v = std::make_pair(x + dx, y + dy);
            auto cost = g.cost(u, v);

            // if we could still relax an edge, there must be a negative loop
            if (cost != graph::unreachable() &&
                distances[y][x] + cost < distances[y + dy][x + dx]) {
                relaxed = true;
                std::cerr << "Found a negative loop\n";
                break;
            }
        }
    }

    return distances;
}
```

And if we run it with our negative cost of entering vortices:

```
[Bellman-Ford] Found a negative loop
[Bellman-Ford] Cost: -240
```

### On the Bellman-Ford

You might be surprised that we have managed to iterate from a brute-force
method that mindlessly tries to find a better path until there are no better
paths left, all the way to the Bellman-Ford algorithm.

I always say that Bellman-Ford is a _smart_ brute-force. BF is also an
algorithm that leverages _dynamic programming_. You might wonder how it can
utilize DP if it is “technically” a brute-force technique. The table with the
shortest distances is the thing that makes it DP.

> I might not know the shortest path yet, but I do remember all of the other
> paths, and I can improve them, if possible.

That's where the beauty of both _dynamic programming_ and _relaxing_ gets
merged together and does its magic.

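One common way to spell the dynamic programming out is as a recurrence over the
allowed number of edges; with $d_k(v)$ being the best cost of reaching $v$
using at most $k$ edges, one round of the relaxations computes

$$
d_k(v) = \min\left(d_{k - 1}(v),\ \min_{(u, v) \in E}\bigl(d_{k - 1}(u) + w(u, v)\bigr)\right)
$$

with $d_0$ being $0$ for the source and unreachable for everything else. Our
implementation keeps just one table instead of one per $k$, which can only
speed the convergence up, never slow it down.
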
The proof of the correctness of the BF is done via induction on the number of
iterations. I would suggest trying to prove the correctness yourself and
possibly looking it up, if necessary.

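If you want a hint for the induction, the usual invariant (a textbook one,
nothing specific to this post) is that after $k$ iterations

$$
d(v) \;\le\; \min \{\, c(p) \mid p \text{ is a path from the source to } v
\text{ with at most } k \text{ edges} \,\}
$$

and since every finite $d(v)$ is always the cost of some actual walk from the
source to $v$, the two together give the correctness after
$\vert V \vert - 1$ iterations (as long as there are no negative loops).
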
Also, the correctness of the BF relies on the conclusion we've made when fixing
the infinite loop in our naïve BF solution.

## Time complexity

Let's have a short look at the time complexities of the presented algorithms
(a note on what this means for our specific grid follows the list):

1. naïve approach: given that there are no negative loops, we are bound by the
   worst-case ordering of the relaxations, which results in

   $$
   \mathcal{O}(\vert V \vert \cdot \vert E \vert)
   $$

2. our naïve approach with the fixed count of iterations instead of the
   `do-while` loop results in the same worst-case time complexity:

   $$
   \Theta(\vert V \vert \cdot \vert E \vert)
   $$

3. and finally the well-known Bellman-Ford's algorithm time complexity:

   $$
   \Theta(\vert V \vert \cdot \vert E \vert)
   $$

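For our specific grid the bounds above are easy to put into concrete numbers:
every cell has at most four neighbours, so $\vert E \vert \le 4 \cdot \vert V \vert$,
and with $\vert V \vert = \mathit{width} \cdot \mathit{height}$ all of them
collapse to

$$
\mathcal{O}(\vert V \vert \cdot \vert E \vert) = \mathcal{O}(\vert V \vert^2)
= \mathcal{O}(\mathit{width}^2 \cdot \mathit{height}^2)
$$
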
## Small refactor

Since we are literally copy-pasting the body of the loops just for the sake of
relaxing, we can factor that part out into a separate function:

```cpp
static auto _check_vertex(const graph& g,
                          std::vector<std::vector<int>>& distances, int v,
                          bool check_only = false) -> bool {
    bool improvement_found = false;

    // unpack the vertex coordinates
    int y = v / g.width();
    int x = v % g.width();

    // skip the cells we cannot reach
    if (distances[y][x] == graph::unreachable()) {
        return false;
    }

    // go through the neighbours
    auto u = std::make_pair(x, y);
    for (const auto& [dx, dy] : DIRECTIONS) {
        auto v = std::make_pair(x + dx, y + dy);
        auto cost = g.cost(u, v);

        // if we can move to the cell and it's better, relax¹ it
        if (cost != graph::unreachable() &&
            distances[y][x] + cost < distances[y + dy][x + dx]) {
            // when only checking, report the possible improvement right away
            if (check_only) {
                return true;
            }

            distances[y + dy][x + dx] = distances[y][x] + cost;
            improvement_found = true;
        }
    }

    return improvement_found;
}
```

This function can also be used for checking the negative loops at the end of
the BF, by using the `check_only` parameter to signal that we just want to know
whether any edge would be relaxed, instead of performing the relaxation itself.

Then we can also see the differences between the specific versions of our
path-finding algorithms more clearly:

```cpp
auto bf(const graph& g, const vertex_t& source, const vertex_t& destination)
    -> int {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // ‹destination› must be within the bounds
    assert(g.has(destination));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we need to improve the paths as long as possible
    bool improvement_found;
    do {
        // reset the flag at the beginning
        improvement_found = false;

        // go through all of the vertices
        for (int v = g.height() * g.width() - 1; v >= 0; --v) {
            improvement_found = _check_vertex(g, distances, v) || improvement_found;
        }
    } while (improvement_found);

    return distances[destination.second][destination.first];
}

auto bf_finite(const graph& g, const vertex_t& source,
               const vertex_t& destination) -> int {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // ‹destination› must be within the bounds
    assert(g.has(destination));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we only iterate as many times as cells that we have
    for (int i = g.height() * g.width(); i > 0; --i) {
        // go through all of the vertices
        for (int v = g.height() * g.width() - 1; v >= 0; --v) {
            _check_vertex(g, distances, v);
        }
    }

    return distances[destination.second][destination.first];
}

auto bellman_ford(const graph& g, const vertex_t& source)
    -> std::vector<std::vector<int>> {
    // ‹source› must be within the bounds
    assert(g.has(source));

    // we need to initialize the distances
    std::vector<std::vector<int>> distances(
        g.height(), std::vector(g.width(), graph::unreachable()));

    // ‹source› vertex denotes the beginning where the cost is 0
    auto [sx, sy] = source;
    distances[sy][sx] = 0;

    // now we only iterate as many times as cells that we have
    for (int i = g.height() * g.width(); i > 0; --i) {
        // go through all of the vertices
        for (int v = g.height() * g.width() - 1; v >= 0; --v) {
            _check_vertex(g, distances, v);
        }
    }

    // now we check for the negative loops
    for (int v = g.height() * g.width() - 1; v >= 0; --v) {
        if (_check_vertex(g, distances, v, true)) {
            std::cerr << "[Bellman-Ford] Found a negative loop\n";
            break;
        }
    }

    return distances;
}
```

---

:::tip

You might've noticed that I've been using the abbreviation _BF_ interchangeably
for both _Bellman-Ford_ and _brute-force_. If you think about the way the
Bellman-Ford algorithm works, you should realize that in the worst case it
keeps updating the shortest path until no shorter path exists, so in a sense,
you could really consider it a brute-force algorithm.

:::

[^1]: [Breadth-first search](https://en.wikipedia.org/wiki/Breadth-first_search)
[^2]:
    Of course, there are some technicalities like keeping track of the visited
    vertices, so that already visited vertices do not taint the shortest path.

[^3]: or at least you should, LOL