<?xml version="1.0" encoding="UTF-8"?>

<modsCollection xmlns:xlink="http://www.w3.org/1999/xlink" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns="http://www.loc.gov/mods/v3" xsi:schemaLocation="http://www.loc.gov/mods/v3 http://www.loc.gov/standards/mods/v3/mods-3-3.xsd">
<mods version="3.3">

<genre>preprint</genre>

<titleInfo><title>Adaptive higher order reversible integrators for memory efficient deep learning</title></titleInfo>

<name type="personal">
  <namePart type="given">Sofya</namePart>
  <namePart type="family">Maslovskaya</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">87909</identifier></name>
<name type="personal">
  <namePart type="given">Sina</namePart>
  <namePart type="family">Ober-Blöbaum</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">16494</identifier></name>
<name type="personal">
  <namePart type="given">Christian</namePart>
  <namePart type="family">Offen</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">85279</identifier><description xsi:type="identifierDefinition" type="orcid">0000-0002-5940-8057</description></name>
<name type="personal">
  <namePart type="given">Pranav</namePart>
  <namePart type="family">Singh</namePart>
  <role><roleTerm type="text">author</roleTerm> </role></name>
<name type="personal">
  <namePart type="given">Boris Edgar</namePart>
  <namePart type="family">Wembe Moafo</namePart>
  <role><roleTerm type="text">author</roleTerm> </role><identifier type="local">95394</identifier></name>

<name type="corporate">
  <namePart></namePart>
  <identifier type="local">636</identifier>
  <role>
    <roleTerm type="text">department</roleTerm>
  </role>
</name>


<abstract lang="eng">The depth of networks plays a crucial role in the effectiveness of deep learning. However, the memory requirement for backpropagation scales linearly with the number of layers, which leads to memory bottlenecks during training. Moreover, deep networks are often unable to handle time-series data appearing at irregular intervals. These issues can be resolved by considering continuous-depth networks based on the neural ODE framework in combination with reversible integration methods that allow for variable time-steps. Reversibility of the method ensures that the memory requirement for training is independent of network depth, while variable time-steps are required for assimilating time-series data at irregular intervals. However, at present, there are no known higher-order reversible methods with this property. High-order methods are especially important when a high level of accuracy in learning is required or when small time-steps are necessary due to large errors in the time integration of neural ODEs, for instance in the context of complex dynamical systems such as Kepler systems and molecular dynamics. The requirement of small time-steps when using a low-order method can significantly increase the computational cost of training as well as inference. In this work, we present an approach for constructing high-order reversible methods that allow adaptive time-stepping. Our numerical tests show the advantages in computational speed when applied to the task of learning dynamical systems.</abstract>

<relatedItem type="constituent">
  <location>
    <url displayLabel="2410.09537v2.pdf">https://ris.uni-paderborn.de/download/59794/59795/2410.09537v2.pdf</url>
  </location>
  <physicalDescription><internetMediaType>application/pdf</internetMediaType></physicalDescription>
</relatedItem>
<originInfo><dateIssued encoding="w3cdtf">2025</dateIssued>
</originInfo>
<language><languageTerm authority="iso639-2b" type="code">eng</languageTerm>
</language>



<relatedItem type="host">
  <identifier type="arXiv">2410.09537</identifier>
<part>
</part>
</relatedItem>


<extension>
<bibliographicCitation>
<apa>Maslovskaya, S., Ober-Blöbaum, S., Offen, C., Singh, P., &amp;#38; Wembe Moafo, B. E. (2025). &lt;i&gt;Adaptive higher order reversible integrators for memory efficient deep learning&lt;/i&gt;.</apa>
<mla>Maslovskaya, Sofya, et al. &lt;i&gt;Adaptive Higher Order Reversible Integrators for Memory Efficient Deep Learning&lt;/i&gt;. 2025.</mla>
<bibtex>@article{Maslovskaya_Ober-Blöbaum_Offen_Singh_Wembe Moafo_2025, title={Adaptive higher order reversible integrators for memory efficient deep learning}, author={Maslovskaya, Sofya and Ober-Blöbaum, Sina and Offen, Christian and Singh, Pranav and Wembe Moafo, Boris Edgar}, year={2025} }</bibtex>
<short>S. Maslovskaya, S. Ober-Blöbaum, C. Offen, P. Singh, B.E. Wembe Moafo, (2025).</short>
<ieee>S. Maslovskaya, S. Ober-Blöbaum, C. Offen, P. Singh, and B. E. Wembe Moafo, “Adaptive higher order reversible integrators for memory efficient deep learning.” 2025.</ieee>
<chicago>Maslovskaya, Sofya, Sina Ober-Blöbaum, Christian Offen, Pranav Singh, and Boris Edgar Wembe Moafo. “Adaptive Higher Order Reversible Integrators for Memory Efficient Deep Learning,” 2025.</chicago>
<ama>Maslovskaya S, Ober-Blöbaum S, Offen C, Singh P, Wembe Moafo BE. Adaptive higher order reversible integrators for memory efficient deep learning. Published online 2025.</ama>
</bibliographicCitation>
</extension>
<recordInfo><recordIdentifier>59794</recordIdentifier><recordCreationDate encoding="w3cdtf">2025-05-05T09:25:28Z</recordCreationDate><recordChangeDate encoding="w3cdtf">2025-09-30T15:16:09Z</recordChangeDate>
</recordInfo>
</mods>
</modsCollection>
