diff --git a/README.md b/README.md
index fbda4e9..c3ded3c 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Reading the data from an OSM PDF file and converting it to a graph is done in
 `src/bin/generate_grid.rs`.
 The implementation of the spherical point in polygon test is done in
 `src/polygon.rs`
-with the function `Polygon::contains`.
+with the function `contains()`.
 There is one polygon in the graph, for which no valid outside polygon can be
 found.
 I did not have the time to investigate this further.
@@ -29,7 +29,8 @@ I did not have the time to investigate this further.
 
 The code uses the osmpbfreader crate.
 Sadly this module uses ~10GB of memory to extract the data from the PBF file
-with all the coastlines.
+with all the coastlines. So far I have not had time to look into what happens
+there.
 
 ### Point in Polygon
 
@@ -49,9 +50,9 @@ Import and Export from/to a file can be done with the `from_fmi_file` and `write
 
 ### Dijkstra Benchmarks
 
-Dijkstras algorithm is implenented in `gridgraph.rs` with `GridGraph::shortest_path`.
+Dijkstra's algorithm is implemented in `gridgraph.rs`.
 It uses a Heap to store the nodes.
-For details on how to run benchmarks see the benchmarks section at the end.
+For details on how to run the benchmarks, see the benchmarks section at the end.
 
 ## Task 6
 
@@ -75,9 +76,6 @@ I implemented ALT, as described in [1].
 Additionally A\* is available with a simple, unoptimized haversine distance as
 the heuristic.
 
-A\* is implemented in `src/astar.rs` and the heuristics for ALT are implemented
-in `src/alt.rs`.
-
 ### Landmarks for ALT
 
 currently 3 different landmark generation methods are available
@@ -96,7 +94,7 @@ generates landmarks for 4, 8, 16, 32 and 64 landmarks, both greedy and random.
 
 # Running the benchmarks
 
 First a set of queries is needed.
-These can be generated with `generate_benchmark_targets --graph > targets.json`.
+This can be done with `generate_benchmark_targets --graph > targets.json`.
 This generates 1000 random, distinct source and destination pairs.
 The `--amount` parameter allows to adjust the number of pairs generated.
@@ -112,11 +110,4 @@ are used to answer the query.
 The benchmark prints out how many nodes were popped from the heap for each run
 and the average time per route.
 
-`utils/run_benchmarks.py` is a wrapper script that runs the benchmarks for a
-big set of parameters.
-
-`utils/plot_results.py` generates several plots of the results.
-
-# References
-
-[1](Computing the Shortest Path: A\* meets Graph Theory, A. Goldberg and C. Harrelson, Microsoft Research, Technical Report MSR-TR-2004-24, 2004)
+[1] Computing the Shortest Path: A* meets Graph Theory, A. Goldberg and C. Harrelson, Microsoft Research, Technical Report MSR-TR-2004-24, 2004
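The README hunks above refer to the ALT heuristic from [1] without spelling out the bound it uses, so here is a minimal sketch of the landmark lower bound. It is illustrative only: the names `alt_lower_bound` and `landmark_dists` are hypothetical and are not the identifiers used in `src/alt.rs`, and the real implementation may differ.

```rust
// Hypothetical sketch of the ALT lower bound from [1]; names are illustrative.
//
// `landmark_dists[l][v]` is the precomputed distance from landmark `l` to
// node `v`. On an undirected graph the triangle inequality gives
// |d(l, v) - d(l, t)| <= d(v, t), so the maximum over all landmarks is a
// feasible A* heuristic for the remaining distance from `v` to the target `t`.
fn alt_lower_bound(landmark_dists: &[Vec<f64>], v: usize, t: usize) -> f64 {
    landmark_dists
        .iter()
        .map(|d| (d[v] - d[t]).abs())
        .fold(0.0, f64::max)
}

fn main() {
    // Four nodes on a line at positions 0, 2, 5 and 9; landmarks at nodes 0 and 3.
    let landmark_dists = vec![
        vec![0.0, 2.0, 5.0, 9.0], // distances from the landmark at node 0
        vec![9.0, 7.0, 4.0, 0.0], // distances from the landmark at node 3
    ];
    // Lower bound on the distance from node 1 to node 3 (actual distance: 7).
    println!("{}", alt_lower_bound(&landmark_dists, 1, 3)); // prints 7
}
```

With the landmark-to-node distances precomputed, evaluating the bound is a single pass over the landmarks per heap pop, which is why the benchmarks described in the README vary the number of landmarks (4 up to 64) and compare greedy against random selection.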
diff --git a/landmarks/ocean_handpicked_44.json b/landmarks/handpicked_44.json
similarity index 100%
rename from landmarks/ocean_handpicked_44.json
rename to landmarks/handpicked_44.json
diff --git a/src/alt.rs b/src/alt.rs
index 4afb7f0..3b3c0c6 100644
--- a/src/alt.rs
+++ b/src/alt.rs
@@ -137,8 +137,7 @@ impl LandmarkBestSet<'_> {
         results.reverse();
 
         self.best_landmarks.clear();
-
-        for result in results[..(self.best_size.min(results.len()))].iter() {
+        for result in results[..self.best_size].iter() {
             self.best_landmarks.push(result.0);
         }
     }
diff --git a/utils/plot_results.py b/utils/plot_results.py
deleted file mode 100755
index f72a1e1..0000000
--- a/utils/plot_results.py
+++ /dev/null
@@ -1,90 +0,0 @@
-#!/usr/bin/env python3
-
-from sys import argv, exit
-import os
-from csv import writer
-from typing import Tuple, List
-import re
-import numpy as np
-
-import matplotlib.pyplot as plt
-
-if len(argv) != 2:
-    print(f"Usage: { argv[0] } <path>")
-    exit(1)
-
-path = argv[1]
-
-files = [f for f in os.listdir(path) if os.path.isfile(f"{ path }/{f}")]
-
-files = [f for f in files if re.match(r"greedy_64_.+", f) is not None ]
-
-
-def parse_file(file: str) -> Tuple[float, List[int]]:
-
-    pops = list()
-    time = None
-    with open(file) as f:
-        for line in f.readlines():
-            m = re.match(r"popped\s(?P<pops>\d+)\s.*", line)
-            if m is not None:
-                pops.append(int(m.groupdict()["pops"]))
-                continue
-
-            m = re.match(r"It took\s(?P