I went back and tested my heaviest libdeps algorithm, and it was taking much longer than when I originally tested it several months ago (not sure what changed to cause the increase). I stopped it 20 minutes in and decided to just improve it. The main bottleneck was computing the redundancy of a given transitive edge: the number of other direct public edges that would create the same transitive edge. Originally the algorithm recomputed this for every edge in the graph, so precomputing it all once speeds things up considerably. Now it's about 8 seconds to precompute the redundancy and 30 seconds to count the weight of every direct public edge in the graph.
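For concreteness, here's a rough sketch of what the precompute pass does, using networkx. The `direct`/`public` edge attributes are placeholders for however the real graph schema marks them, and this ignores visibility rules along the transitive path, so treat it as an illustration rather than the actual implementation:

```python
import networkx as nx
from collections import Counter

def precompute_redundancy(graph: nx.DiGraph) -> Counter:
    """One pass over all direct public edges, tallying for each induced
    transitive edge (u, w) how many direct public edges out of u reach w."""
    redundancy = Counter()
    for u, v, attrs in graph.edges(data=True):
        # 'direct' and 'public' are hypothetical attribute names, not the
        # actual libdeps graph schema.
        if not (attrs.get("direct") and attrs.get("public")):
            continue
        # Everything reachable from v becomes a transitive dependency of u
        # via this direct public edge.
        for w in nx.descendants(graph, v):
            redundancy[(u, w)] += 1
    # To get "other" edges for a specific transitive edge, subtract 1 to
    # exclude the edge's own contributing path.
    return redundancy
```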
Question: I can precompute the redundancy during the SCons build and store it in the graph data, but that means bumping the schema version, so the algorithm could only run on more recent commits. Alternatively, I can precompute every time the analyzer is run. This leads to several options:
- make the algorithm require a new schema version and precompute in the SCons build
- always re-precompute when the analyzer is run
- re-compute in the analyzer when the graph is an older schema version, then write the result back to the graph and update the graph file's schema version (maintenance concerns; I would prefer the analyzer remain read-only)
- precompute during the SCons build and also in the analyzer, always precomputing if the graph is an older schema version (rough sketch of this check below)
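The last option would look roughly like this on the analyzer side; the version number and the graph-level attribute names are made up for illustration:

```python
SCHEMA_WITH_REDUNDANCY = 4  # hypothetical: first schema version carrying precomputed data

def load_redundancy(graph: nx.DiGraph) -> Counter:
    """Prefer redundancy precomputed at build time; fall back to computing
    it in the analyzer for older graphs, without writing anything back."""
    if graph.graph.get("schema_version", 0) >= SCHEMA_WITH_REDUNDANCY:
        # Assumes the SCons build stored the table as a graph-level
        # attribute; the real storage location may differ.
        return graph.graph["redundancy"]
    return precompute_redundancy(graph)  # read-only fallback for old graphs
```

This keeps the analyzer read-only and still works on old commits, at the cost of the ~8 second fallback precompute when the graph predates the schema bump.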