Posts: 813 Liked others: 180 Was liked: 134
Rank: 2d
GD Posts: 422
KGS: quantumf/komi/balvenie/loads more
Any suggestions for how to construct a tough/pathological problem for BuildDependencies? I've tried very long (100,000 files that depend in sequence on one another) and very wide (100,000 files that depend on a single file), but my code is near instantaneous with these, and yet I'm getting TLE on test 2. Are we expected to deal with circular dependencies?
Posts: 648 Location: Littleton, CO Liked others: 230 Was liked: 188
Rank: KGS 4k
Universal go server handle: jeromie
quantumf wrote:
Any suggestions for how to construct a tough/pathological problem for BuildDependencies? I've tried very long (100,000 files that depend in sequence on one another) and very wide (100,000 files that depend on a single file), but my code is near instantaneous with these, and yet I'm getting TLE on test 2. Are we expected to deal with circular dependencies?
There is extra, unused data in the file, i.e. files that do not need to be rebuilt. It is those extra unconnected files that gave my algorithm fits. (I think.)
Posts: 1828 Location: Bellevue, WA Liked others: 89 Was liked: 818
Rank: AGA 5d
KGS: Capsule 4d
Tygem: 치킨까스 5d
Well I managed to get E and F as I'd hoped (I think I spent more time tinkering with input and forgetting about cin.ignore() than the actual algorithm for E...), but with only 30 minutes before I have to visit family and not coming back until past the deadline, it looks like I won't be able to squeeze D in (although I'll most likely be pondering the problem at the dinner table...).
Posts: 131 Liked others: 163 Was liked: 41
Rank: Chinese 3d
KGS: 3d
DGS: 3d
OGS: 3d
Overlooking the following statement for Problem F cost me 26 tries to figure out (I even used 'try ... except ValueError' statements to catch the weird input exceptions):
"The input for each room consists of one or more lines"
but I finally got all the problems despite the huge penalties.
Edit: Ah, it seems some of you have already pointed out the pitfall ...
Posts: 1828 Location: Bellevue, WA Liked others: 89 Was liked: 818
Rank: AGA 5d
KGS: Capsule 4d
Tygem: 치킨까스 5d
Leaderboard
bernds - 16 +5
Solomon - 12 +5
dfan - 9
ugoertz - 9 +1
gamesorry - 7 +6
jeromie - 5 +3
quantumf - 5 +2
peti29 - 4 +2
Kirby - 3 +1
I'll get around to writing up Problem F (XYZZY). The key algorithm is either Bellman-Ford or the Shortest Path Faster Algorithm (SPFA), but you can't use either out of the box, and I'm having a hard time explaining this succinctly and clearly (a sign of my lack of true understanding!); alternatively, bernds, gamesorry, or jeromie can post their explanation of it if they have time. Code posted for A, B and D; I didn't bother for C since it looks like most people got it, but I'm happy to provide it if requested (or someone else who got the problem can).
Problem A (Coast Length)
I used floodfill twice, once on the edges of the grid to mark the seas (to distinguish them from lakes; I just relabeled them as '2', but creating a boolean matrix also works), and then floodfill again on land to count the coasts, being careful to handle the "edge" cases.
#include <iostream>
#include <string>
#include <utility>
#include <vector>

void floodfill_mark_seas(int r, int c, int N, int M,
                         std::vector<std::vector<int>> &grid,
                         std::vector<std::pair<int, int>> &directions) {
    if (r >= N || c >= M || r < 0 || c < 0) return;
    if (grid[r][c] != 0) return;
    grid[r][c] = 2;  // water reachable from the border is sea
    for (int i = 0; i < 4; ++i) {
        floodfill_mark_seas(r + directions[i].first, c + directions[i].second,
                            N, M, grid, directions);
    }
}

int floodfill_count_coast(int r, int c, int N, int M,
                          std::vector<std::vector<int>> &grid,
                          std::vector<std::pair<int, int>> &directions) {
    if (r >= N || c >= M || r < 0 || c < 0) return 0;
    if (grid[r][c] != 1) return 0;
    // Each side of this land cell that touches the grid border or a sea
    // cell contributes one unit of coast.
    int coast_count = 0;
    coast_count += (r == 0) + (r > 0 && grid[r - 1][c] == 2)
                 + (c == 0) + (c > 0 && grid[r][c - 1] == 2)
                 + (r == N - 1) + (r < N - 1 && grid[r + 1][c] == 2)
                 + (c == M - 1) + (c < M - 1 && grid[r][c + 1] == 2);
    grid[r][c] = -1;  // mark this land cell as visited
    for (int i = 0; i < 4; ++i) {
        coast_count += floodfill_count_coast(r + directions[i].first,
                                             c + directions[i].second,
                                             N, M, grid, directions);
    }
    return coast_count;
}

int main() {
    int N, M;
    std::cin >> N >> M;
    std::vector<std::vector<int>> grid(N, std::vector<int>(M));
    std::vector<std::pair<int, int>> directions = {
        {1, 0}, {0, 1}, {-1, 0}, {0, -1}
    };
    for (int i = 0; i < N; ++i) {
        std::string row;
        std::cin >> row;
        for (int j = 0; j < M; ++j) {
            grid[i][j] = row[j] - '0';
        }
    }
    // distinguish seas (2) from lakes (0)
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < M; ++j) {
            if (grid[i][j] == 0 && (i == 0 || i == N - 1 || j == 0 || j == M - 1)) {
                floodfill_mark_seas(i, j, N, M, grid, directions);
            }
        }
    }
    int res = 0;
    for (int i = 0; i < N; ++i) {
        for (int j = 0; j < M; ++j) {
            if (grid[i][j] == 1) {
                res += floodfill_count_coast(i, j, N, M, grid, directions);
            }
        }
    }
    std::cout << res << std::endl;
}
Problem B (Arctic Network)
This is a minimum spanning tree (MST) problem, where the tidbit regarding satellites allows it to be a forest to reduce the total weight (n satellites = n trees). To actually compute the value, you could use either Prim's or Kruskal's algorithm. I chose Kruskal's algorithm, but only because I find it easier to grasp, and it's nice to use the Union-Find disjoint set structure (for cycle checking). Once the number of disjoint sets in UF reaches the number of satellites, we can stop.
One thing to note is that a common theme in this contest was using an adjacency list and/or edge list, and I think this problem, as well as every other problem except for Problem A, utilized adjacency lists. Anyways, code below; small nitpick to myself is that I should have used a class to keep track of coordinates instead of pairs.
Problem C (Dominoes 2)
Like Problem A, I used floodfill here, but instead of an implicit 2D grid, I used an adjacency list to store the vertices and edges.
Problem E (Build Dependencies)
This is a topological sort problem. For me, the only annoying part was handling the input correctly (e.g., segfaulting until I realized I forgot to cin.ignore()). I got a bit lazy using two maps to label the files as integers (str -> int, int -> str), which I'm pretty sure is not necessary. Regardless, code below:
This round has been a little bit bitter for me, even though I got C on the first try, which was refreshing. Maybe C# is at a disadvantage after all?
The story of my solution for A:
First I implemented a naive recursive algorithm which hit the memory limit (strangely, because I didn't allocate anything in the recursive function except local status variables, so I could have understood a stack overflow, for example). Then I went to look up a proper flood fill algorithm on Wikipedia. Then I hit the time limit. (I expanded the map with a one-pixel-thick border of ocean so that I wouldn't need to worry about islands touching the edge.) I then started to think about a totally different geometric solution involving polygons and cw/ccw edges, but gazing at the example map some more I realized that I wouldn't be able to tell lakes apart from bays inside self-touching polygons. I became desperate and went to find a solution for this problem by name (i.e. cheating), but what I found was a one-sentence solution basically saying: "flood fill". What?! I already tried that! So then I finally added a speed improvement: I "pre-flooded" the ocean with a primitive routine that only checked immediate neighbors, one run from the top left and one run from the bottom right. Then I went over the map a third time, and whenever I found an unclassified water cell that had an "ocean" neighbor, I started a flood fill from that cell with the proper algorithm. That let me pass the test, but man, was it a lot of hassle.
Solomon: I'll need to look at your code some more before I get what you're doing there.
Quote:
I used floodfill twice, once on the edges of the grid to mark the seas (to distinguish them from lakes; I just relabeled them as '2', but creating a boolean matrix also works), and then floodfill again on land to count the coasts, being careful to handle the "edge" cases.
Funny, I thought I was being silly for using a flood-fill(ish) algorithm on bitmaps. I think the way this was supposed to be solved is as follows: increase N and M by two to make a border around the map, marked as ocean (as opposed to just "water"). Then treat the squares as graph nodes and run a simple graph traversal starting at any ocean node, following ocean and water and converting the latter to the former. Edges of the graph don't need to be represented because they are obvious. In the end, look at all non-ocean squares and add the number of neighbouring ocean squares to the length of the coastline.
Problem B (Arctic Network)
Quote:
This is a minimum spanning tree (MST) problem, where the tidbit regarding satellites allows it to be a forest to reduce the total weight (n satellites = n trees). To actually compute the value, you could use either Prim's or Kruskal's algorithm. I chose Kruskal's algorithm, but only because I find it easier to grasp, and it's nice to use the Union-Find disjoint set structure (for cycle checking). Once the number of disjoint sets in UF reaches the number of satellites, we can stop.
One thing to note is that a common theme in this contest was using an adjacency list and/or edge list, and I think this problem, as well as every other problem except for Problem A, utilized adjacency lists. Anyways, code below; small nitpick to myself is that I should have used a class to keep track of coordinates instead of pairs.
It took me a while to realize that this was an MST problem. I started out trying to compute the maximum distance required to cross every pair by looking at i->j->k paths and replacing i->k if the other was better; that's an O(n^3) algorithm, though. I did realize later that there can't be cycles, so I'm looking at a tree, and then it clicked that it must be the MST. My program still occasionally gives answers different from yours, differing by a fraction, so maybe I missed something else, perhaps in how placing the satellites affects it.
Problem C (Dominoes 2)
Quote:
Like Problem A, I used floodfill here, but instead of an implicit 2D grid, I used an adjacency list to store the vertices and edges.
I associate flood fill with a graphical algorithm. Here, I'd just call it graph traversal, any kind. DFS probably, and I had edge lists.
Problem D (Chess)
This wants us to test whether there are loops in a directed graph. There's a small wrinkle with the draws: I found it best to make two passes over the input. First, we use only the draws to build equivalence classes of players. Then we construct a graph out of those classes, not the actual players. Finally, we compute for every class whether a graph traversal finds a cycle. Here I found that the choice between iteration and recursion makes a difference: the recursive algorithm can easily be made to solve the problem for every node we encounter along the way, reducing the amount of effort required significantly.
Problem E (Build Dependencies)
Quote:
This is a topological sort problem. For me, the only annoying part was handling the input correctly (e.g., segfaulting until I realized I forgot to cin.ignore()). I got a bit lazy using two maps to label the files as integers (str -> int, int -> str), which I'm pretty sure is not necessary.
I treated this as a scheduling problem. Once again, just a graph traversal - but this time we run two of them. The first is to count how often a node is visited, that gives us the number of dependencies it has. The second time we decrease the number of dependencies and print it out when the counter reaches zero. I did build a little simple chained hash table for the filenames->node mapping.
Problem F (XYZZY)
The thing to do immediately is to run a backwards traversal to see which nodes can actually reach the exit, and remove all the others to simplify what follows. Then, I'm not sure whether there's a standard algorithm for the rest. I labelled edges with the amount of energy they require the player to have, and the delta they apply to the energy level (it felt a little like the Pieces of Parentheses problem...). I used two sub-steps in a loop. The first is to compute the highest energy you can have when you reach any given node: just look at every edge, see if you already have sufficient energy to take it, and if so check whether the energy at the source plus the delta is higher than the energy at the destination. Once you've done such an update, you may be able to discover new edges or improve their deltas: if there's a path i->j->k, and you know you can achieve enough energy at i to reach k, then there's an edge from i to k. You iterate until you either reach the exit or find a cycle (identical source and destination) that increases energy, which means you can just go around it until you have enough energy to make it to the end. You also stop if you find you cannot make progress; that's the failure state.
Getting the hide and quote tags right is harder than some of the problems... how do people feel, do we actually need to hide discussion (I can see hiding the code)?
I liked the problems, since for all of them there were enough successful submissions, including in Python, to show that hitting the time limit means the choice of algorithm was not ideal. (Although maybe true, this was not as obvious in the second round.)
Besides this, this time my usual approach of "inventing" the required algorithm myself rather than looking it up in books turned out to be particularly unsuitable (given that I had never studied any graph theory algorithms before). Nevertheless, I enjoyed thinking about the problems, and I find the comments about your approaches very instructive.
I had some ideas about A, B, and D, but did not have time to implement them. (I think that in D, processing all ='s first and identifying all those nodes reduces the problem to one where only <'s are given. This just means detecting the (non-)existence of a cycle in a directed graph, right?)
I might have a correct (but too slow) solution for E, but I could be missing something ...
When I first attempted problem F I definitely underestimated it, and I see now that my submitted programs cannot work. In the end I had the idea to first find a spanning tree, starting from the start node, and as a second step to find all cycles in the graph (at this point I did some research and found Johnson's algorithm). Then one could check, in each case, whether running through the cycle costs energy (then we can ignore it), or whether it increases the energy (in this case, if we can reach a node of the cycle from the start node without running out of energy, and can reach the final node from the cycle, then we win). I did not implement this, though. Any comments?
Posts: 1828 Location: Bellevue, WA Liked others: 89 Was liked: 818
Rank: AGA 5d
KGS: Capsule 4d
Tygem: 치킨까스 5d
peti29 wrote:
This round has been a little bit bitter for me, even though I got C on the first try, which was refreshing. Maybe C# is at a disadvantage after all?
The story of my solution for A:
First I implemented a naive recursive algorithm which hit the memory limit (strangely, because I didn't allocate anything in the recursive function except local status variables, so I could have understood a stack overflow, for example). Then I went to look up a proper flood fill algorithm on Wikipedia. Then I hit the time limit. (I expanded the map with a one-pixel-thick border of ocean so that I wouldn't need to worry about islands touching the edge.) I then started to think about a totally different geometric solution involving polygons and cw/ccw edges, but gazing at the example map some more I realized that I wouldn't be able to tell lakes apart from bays inside self-touching polygons. I became desperate and went to find a solution for this problem by name (i.e. cheating), but what I found was a one-sentence solution basically saying: "flood fill". What?! I already tried that! So then I finally added a speed improvement: I "pre-flooded" the ocean with a primitive routine that only checked immediate neighbors, one run from the top left and one run from the bottom right. Then I went over the map a third time, and whenever I found an unclassified water cell that had an "ocean" neighbor, I started a flood fill from that cell with the proper algorithm. That let me pass the test, but man, was it a lot of hassle.
Solomon: I'll need to look at your code some more before I get what you're doing there.
I went ahead and added my implementation for A in case it might help...
Posts: 1828 Location: Bellevue, WA Liked others: 89 Was liked: 818
Rank: AGA 5d
KGS: Capsule 4d
Tygem: 치킨까스 5d
bernds wrote:
Getting the hide and quote tags right is harder than some of the problems... how do people feel, do we actually need to hide discussion (I can see hiding the code)?
Good point, I agree that we should just keep code hidden but discussion of solutions should be okay not to hide...
Problem A (coast problem): My first attempt was to fill in any lakes first, and then iterate through each cell to see if it bordered water. To fill the lakes, I had a routine that checked for a path to the outside; if I found none, I did a flood fill on the lake to turn it into land. This was too slow. I ended up changing the initial map to add a border of water around the edge, then started at position (0,0) and did a search for all land edges. This was fast enough to pass the test cases.
Problem C (domino one): I tried both with array and hash table to add each of the "rules" (domino X tips domino Y) to a list. Then, I'd take the domino that was tipped, query the array/hash table, and see which dominoes it tipped over. For each of those dominoes, I queried the array/hash table to see what ones they tipped over. This seemed to me to be a simple and straightforward way to solve the problem. But I'd either run out of memory or run out of time. I got frustrated/angry, because it seemed like it should be an efficient solution to the problem. So I quit, and didn't work on the problems, anymore.
I guess I was kind of a baby in my reaction to Problem B, but it was too depressing to continue.
_________________ Discipline is remembering what you want. -David Campbell
Quote:
Getting the hide and quote tags right is harder than some of the problems... how do people feel, do we actually need to hide discussion (I can see hiding the code)?
Nowadays I hide most of my long posts (outside of This 'n' That), as a courtesy to those who want to skip over them. You can more easily scroll past them if they are hidden.
Quote:
Problem C (domino one): I tried both with array and hash table to add each of the "rules" (domino X tips domino Y) to a list. Then, I'd take the domino that was tipped, query the array/hash table, and see which dominoes it tipped over. For each of those dominoes, I queried the array/hash table to see what ones they tipped over. This seemed to me to be a simple and straightforward way to solve the problem. But I'd either run out of memory or run out of time. I got frustrated/angry, because it seemed like it should be an efficient solution to the problem. So I quit, and didn't work on the problems, anymore.
Have you worked with graphs before? There are typically two ways of representing them, either with a two-dimensional array that holds information for each node pair i and j whether there is an edge between them, or by associating a list of edges with each node.
Here's my dominoes2 attempt, which shows one way of organizing the edges as a list (or rather a dynamically expanded array but it's the same thing really). Also, an iterative traversal algorithm. I've used variants of these pieces in several of the problems, but this here is the most basic form.
Posts: 648 Location: Littleton, CO Liked others: 230 Was liked: 188
Rank: KGS 4k
Universal go server handle: jeromie
I did the same thing as Kirby for Problem A (coastline). I found it kind of funny that I used a flood fill algorithm to solve that one based on the problem statement.
For Problem C (Dominoes), I made each domino a node in a linked list with a push method. As long as a domino was not already knocked over, it would set itself and all of its children to a fallen state. At the end I just iterated through the list of dominoes and counted how many were fallen. (I could have improved performance by keeping track of how many were fallen as they were pushed over, but this was fast enough.)
For Problem F (xyzzy), I did a modified breadth first search based on the algorithm for finding the shortest path through a weighted graph. I didn't do true cycle testing because I was afraid it would be too slow, so I just added a line that checked if I had visited the same node 100 times. If I did (and energy was increasing each time), I just set the current player energy to 100000 and blocked that node from being visited again. I suppose it's a bit of a cheat, but I figured the heuristic would be good enough for this problem.
The only other one I had time to attempt was the build dependency problem. I implemented a topological sorting algorithm that I'm pretty sure was correct, but it was too slow on some of the inputs. The topological sort removes nodes that no longer have a parent, and I think my algorithm bogged down because there were "parent" nodes that were irrelevant to the problem. (For instance, if C depends on A and B, and A is the file we change, the fact that B is a parent of C is irrelevant.) I handled this case in the algorithm, but it depended on a lot of (expensive) checks for list membership. I'm pretty sure that if I had pruned the unnecessary nodes from the graph before running the topological sort, it would have been fast enough, but I didn't have a chance to go back and make the change.
Posts: 1828 Location: Bellevue, WA Liked others: 89 Was liked: 818
Rank: AGA 5d
KGS: Capsule 4d
Tygem: 치킨까스 5d
ugoertz wrote:
When I first attempted problem F I definitely underestimated it, and I see now that my submitted programs cannot work. In the end I had the idea to first find a spanning tree, starting from the start node, and as a second step to find all cycles in the graph (at this point I did some research and found Johnson's algorithm). Then one could check, in each case, whether running through the cycle costs energy (then we can ignore it), or whether it increases the energy (in this case, if we can reach a node of the cycle from the start node without running out of energy, and can reach the final node from the cycle, then we win). I did not implement this, though. Any comments?
I hadn't heard of Johnson's algorithm, but I'll take a look and see if I can swap it in for my Bellman-Ford implementation and get it to work!
Problem C (domino one): I tried both with array and hash table to add each of the "rules" (domino X tips domino Y) to a list. Then, I'd take the domino that was tipped, query the array/hash table, and see which dominoes it tipped over. For each of those dominoes, I queried the array/hash table to see what ones they tipped over. This seemed to me to be a simple and straightforward way to solve the problem. But I'd either run out of memory or run out of time. I got frustrated/angry, because it seemed like it should be an efficient solution to the problem. So I quit, and didn't work on the problems, anymore.
Have you worked with graphs before? There are typically two ways of representing them, either with a two-dimensional array that holds information for each node pair i and j whether there is an edge between them, or by associating a list of edges with each node.
Here's my dominoes2 attempt, which shows one way of organizing the edges as a list (or rather a dynamically expanded array but it's the same thing really). Also, an iterative traversal algorithm. I've used variants of these pieces in several of the problems, but this here is the most basic form.
/* node, add_edge, set_bit, test_bit and dpr are defined in an earlier
   (hidden) part of the program. */
int main (int argc, char **argv)
{
    int ntc;
    int discard = scanf ("%d\n", &ntc);

    while (ntc-- > 0) {
        int n, m, l;
        scanf ("%d%d%d\n", &n, &m, &l);
        node *nodes = calloc (sizeof (node), n);
        int *worklist = calloc (sizeof (int), n);
        uint64_t *bitmap = calloc (sizeof (uint64_t), (n + 7) / 8);
        while (m-- > 0) {
            int x, y;
            scanf ("%d%d\n", &x, &y);
            add_edge (nodes + x - 1, nodes + y - 1);
        }
        int count = 0;
        while (l-- > 0) {
            int x;
            scanf ("%d\n", &x);
            x--;
            if (test_bit (bitmap, x))
                continue;
            dpr ("considering %d\n", x);
            set_bit (bitmap, x);
            int n_work = 1;
            worklist[0] = x;
            count++;
            while (n_work > 0) {
                int t = worklist[--n_work];
                for (int i = 0; i < nodes[t].n_edges; i++) {
                    int t2 = nodes[t].edges[i] - nodes;
                    if (test_bit (bitmap, t2))
                        continue;
                    dpr ("found new edge: %d %d\n", t, t2);
                    set_bit (bitmap, t2);
                    count++;
                    worklist[n_work++] = t2;
                }
            }
        }
        free (bitmap);
        free (worklist);
        for (int i = 0; i < n; i++)
            free (nodes[i].edges);
        free (nodes);
        printf ("%d\n", count);
    }
    exit (0);
}
I've worked with graphs before, yes. For this approach, I was storing a collection of edges.
Here was my first attempt:
Code:
using System;
using System.Collections.Generic;

namespace CoastLength {
    class Program {
        static void Main(string[] args) {
            int numTestCases = Int32.Parse(Console.In.ReadLine());
            for (int i = 0; i < numTestCases; ++i) {
                string[] nml = Console.In.ReadLine().Split(' ');
                int n = Int32.Parse(nml[0]);
                int m = Int32.Parse(nml[1]);
                int l = Int32.Parse(nml[2]);

                Dictionary<int, HashSet<int>> knockout = new Dictionary<int, HashSet<int>>();
                for (int j = 0; j < m; ++j) {
                    string[] rule = Console.In.ReadLine().Split(' ');
                    int pre = Int32.Parse(rule[0]);
                    int post = Int32.Parse(rule[1]);
                    if (knockout.ContainsKey(pre)) {
                        if (!knockout[pre].Contains(post))
                            knockout[pre].Add(post);
                    } else {
                        knockout.Add(pre, new HashSet<int>() { post });
                    }
                }

                int totalFalls = 1;
                for (int k = 0; k < l; ++k) {
                    int key = Int32.Parse(Console.In.ReadLine());
                    // the original paste broke off here; presumably:
                    totalFalls += getMatchesForKey(key, knockout);
                }
                Console.WriteLine(totalFalls);
            }
        }

        static int getMatchesForKey(int key, Dictionary<int, HashSet<int>> map) {
            if (!map.ContainsKey(key))
                return 0;
            int totalFalls = 0;
            HashSet<int> matches = map[key];
            foreach (int match in matches) {
                ++totalFalls;
                totalFalls += getMatchesForKey(match, map);
            }
            return totalFalls;
        }
    }
}
I didn't think it was that unreasonable - each key in the dictionary represents a node, which has a hashset of nodes (i.e. edges). Once the first domino is tipped, I jump around the dictionary to find the next item. I can see that you are actually setting a bitmap; I suppose that's more efficient than the dictionary structure I used.
The way I see it, though, adding a given edge is done in O(1) time. Traversing the list of dominoes is on the order of the number of dominoes in the chain.
But I see that my approach doesn't account for cycles in the graph. I just thought about that. Maybe that's why I ran into problems? :-p
_________________ Discipline is remembering what you want. -David Campbell
Quote:
I didn't think it was that unreasonable - each key in the dictionary represents a node, which has a hashset of nodes (i.e. edges).
I might not know enough Java, but this sounds unnecessarily complicated. A mapping from an integer to something else is just an array, or a vector, especially when you know the number of elements beforehand. Surely Java has things like that? So that's the natural way to store data for the nodes. And if you know your algorithm is going to want to look at all edges for a given node, it makes sense to store them in something simple like a list or a growable vector/array. The more complex data structures might work but it seems wasteful and for more complex problems you'd need to consider how much time it takes to traverse them.
Quote:
But I see that my approach doesn't account for cycles in the graph. I just thought about that. Maybe that's why I ran into problems? :-p
Quote:
I didn't think it was that unreasonable - each key in the dictionary represents a node, which has a hashset of nodes (i.e. edges).
I might not know enough Java, but this sounds unnecessarily complicated. A mapping from an integer to something else is just an array, or a vector, especially when you know the number of elements beforehand. Surely Java has things like that? So that's the natural way to store data for the nodes. And if you know your algorithm is going to want to look at all edges for a given node, it makes sense to store them in something simple like a list or a growable vector/array. The more complex data structures might work but it seems wasteful and for more complex problems you'd need to consider how much time it takes to traverse them.
Yep, I used an array next, and ran out of memory. The reason I opted for a dictionary as the first choice is that, for a large number of dominoes and a small number of rules, the data structure doesn't have to be very big. Using an array, you have to allocate such that there's an index for every different value of n, so the array could be pretty big, even when there aren't that many rules (unless you create your own hashing mechanism). Anyway, I did try an array keeping the rest the same, and ran out of memory.
(It's written in C#, by the way.)
Quote:
Quote:
But I see that my approach doesn't account for cycles in the graph. I just thought about that. Maybe that's why I ran into problems? :-p
Could be
Yeah, if cycles are acceptable, my solution doesn't handle that. I jump around the hash table (or array), following each of the Y nodes in a rule saying that X knocks down Y. Since I never take anything out of the mapping, a cycle will produce an infinite loop. If that's all the problem was, this was just a case of me not understanding the implications of the problem.
I was frustrated that something that seemed efficient kept failing - maybe I was blinded by that, if this is all the problem was.
_________________ Discipline is remembering what you want. -David Campbell