That's a very good question, especially considering that Gephi 0.9 will drop time "intervals" (i.e., continuous time ranges) in favor of "timestamps."
Many networks need to be analyzed in terms of continuous evolution, and intervals make it easy to build dynamic graphs that represent the natural state of such a network.
With timestamps, however, a variable might have thousands, hundreds of thousands, or even millions of points in time at which a node (or edge) must be marked as "on" or "off." Data files representing dynamic graphs could therefore become very large.
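To make the size difference concrete, here is a minimal sketch (not Gephi's actual data model; the representations are hypothetical) contrasting how an interval and a set of timestamps encode the same period of an edge's existence:

```python
# Hypothetical sketch: two ways to record *when* an edge exists
# in a dynamic graph. This is not Gephi's internal format.

# Interval representation: a single (start, end) pair,
# regardless of how finely time is sampled.
interval = (2000, 2010)  # edge exists from 2000 through 2010

# Timestamp representation: one entry per observed point in time.
# At a yearly resolution the list stays small...
yearly = list(range(2000, 2011))

# ...but at a daily resolution the same ten-year span needs
# thousands of entries (365 days/year, ignoring leap years).
daily = [2000 + d / 365 for d in range(365 * 10 + 1)]

print(len(yearly))  # 11 timestamps
print(len(daily))   # 3651 timestamps
```

The interval stays constant-size no matter the resolution, while the timestamp list grows with the sampling rate, which is the file-size concern raised above.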
Here's what the "Rebuilding Gephi's core for the 0.9 version" page (https://gephi.org/2013/rebuilding-gephis-core-for-the-0-9-version/) says:
One pain point is the way we decided to represent the time. Essentially, there are two ways to represent time for a particular node in a graph: timestamps or intervals. Timestamps are a list of points where the particular nodes exist and intervals have a beginning and an end. For multiple reasons, we thought intervals would be easier to manipulate and more efficient than a (possibly very large) set of timestamps. By talking to our users, we found that intervals are rarely used in real-world data. On the code side, we also found that it makes things much more complex and not that efficient at the end.
In future versions, we’ll remove support for intervals and add timestamps instead. We considered supporting both intervals and timestamps but decided that it would add too much complexity and confusion.
I'm curious what limits there will be on timestamped nodes and edges.