🧹 Chore: Understand why memory usage is building over a model run and model steps are slowing down #203

@DavidOry

Description

From global iteration to global iteration, the model steps in tm2py are taking longer to run and using more memory. In some cases, the runtime increase is substantial. As shown in the spreadsheet attached to #179, the truck component takes 2.46 minutes in the first global iteration, but 4.93 minutes in the third. As noted in #189, invoking the transit skim component independently takes about half as long as invoking it as part of a model run. The memory profiling done as part of #179 also shows the memory footprint of the model run growing over the global iterations.

Progress:

  • Sufficiently defined
  • Approach decided
  • Implemented

Considerations

There are numerous things we could experiment with, including:

  1. Putting pauses in the model run to see whether memory is freed and steps proceed faster. That is, are we calling procedures too rapidly for the garbage collector to keep up?
  2. Leaving components out to see which ones are bogging things down. For example, if we just run the truck component three times, does its runtime grow with each global iteration?
  3. Adding profiling code to see which methods are consuming memory, and for how long.

Number 2 seems like a reasonable and efficient way to start. It's likely that a few computationally intensive components are the problem. If we can isolate them, we can add pauses first to see if that works, and then do the profiling.
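Experiments 1 and 3 could share a small harness. A minimal sketch, assuming a component can be treated as a plain Python callable (tm2py's actual component interface may differ), using the standard-library `tracemalloc` and `gc` modules:

```python
import gc
import time
import tracemalloc


def run_with_pause_and_profile(component, pause_seconds=5.0):
    """Run one model step, record runtime and peak memory, then pause.

    `component` is any callable standing in for a tm2py step
    (hypothetical interface, for illustration only).
    Returns (runtime_seconds, peak_traced_bytes).
    """
    tracemalloc.start()
    start = time.perf_counter()
    component()
    runtime = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()

    # Experiment 1: force a collection and pause between steps so the
    # garbage collector has a chance to free memory before the next step.
    gc.collect()
    time.sleep(pause_seconds)
    return runtime, peak
```

Wrapping the truck component and invoking it three times in a row (experiment 2) would then show directly whether the per-invocation runtime and peak memory grow even outside a full model run.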

@Ennazus, @lmz, @e-lo, @i-am-sijia, @AshishKuls: thoughts?

Metadata

Labels

chore (overhead: doesn't add additional functionality, change performance, or refactor code)

Status

In progress

Milestone

No milestone
