Deep Learning

5 Types of LSTM Recurrent Neural Networks

December 28, 2023
14 min read

Before LSTMs - Recurrent Neural Networks

Utilizing past experiences to enhance future performance is a key aspect of deep learning, as well as machine learning in general.

In neural networks, performance improvement through experience is encoded by model parameters called weights, serving as very long-term memory. After learning from a training set of annotated examples, a neural network is better equipped to make accurate decisions when presented with new, similar examples that it hasn't encountered before. This is the core principle of supervised deep learning, where clear one-to-one mappings exist, such as in image classification tasks.

Many datasets, however, are naturally sequential, requiring consideration of both order and content; examples include video, music, and DNA sequences. Recurrent neural networks (RNNs) are commonly employed for learning from such sequential data. A standard RNN can be thought of as a feed-forward neural network unfolded over time, with weighted connections between hidden states providing short-term memory. The challenge lies in the inherent limitation of this short-term memory, akin to the difficulty of training very deep networks.
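To make the unrolled-over-time picture concrete, here is a minimal sketch in PyTorch (sizes and data are placeholders): the same recurrent weights are reused at every time step, and the hidden state carries short-term memory forward.

```python
import torch
import torch.nn as nn

# A single vanilla RNN cell applied step by step to a toy sequence.
# Sizes are illustrative only.
rnn_cell = nn.RNNCell(input_size=8, hidden_size=16)

sequence = torch.randn(5, 1, 8)   # 5 time steps, batch of 1, 8 features each
h = torch.zeros(1, 16)            # the short-term memory starts empty

for x_t in sequence:
    # The same weights are reused at every time step; the hidden state h
    # carries information forward from one step to the next.
    h = rnn_cell(x_t, h)
```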

The vanishing gradient problem, familiar from back-propagation through many hidden layers, affects RNNs as well: when an RNN is unrolled through time, each time step acts like another layer, and the repeated multiplication of the error signal by values less than 1.0 attenuates it at every step, limiting the network's ability to capture long-term dependencies.
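A quick illustration of that attenuation (the 0.9 factor below is arbitrary, chosen only to show the effect):

```python
# Illustrative only: an error signal repeatedly scaled by a factor < 1.0,
# as happens when gradients are propagated back through many time steps.
gradient = 1.0
scale_per_step = 0.9   # an arbitrary per-step attenuation factor

for step in range(100):
    gradient *= scale_per_step

print(gradient)   # ~2.7e-5: the signal from 100 steps back is effectively gone
```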

In summary, learning from experience is foundational to machine learning, and while RNNs address short-term memory needs for sequential data, challenges like the vanishing gradient problem persist, constraining the capture of long-term dependencies.

Adding Artificial Memory to Neural Networks

In deep learning, overcoming the vanishing gradients challenge led to the adoption of new activation functions (e.g., ReLUs) and innovative architectures (e.g., ResNet and DenseNet) in feed-forward neural networks. For recurrent neural networks (RNNs), an early solution involved initializing recurrent layers to perform a chaotic non-linear transformation of input data.

This approach, known as reservoir computing, intentionally sets the recurrent system to be nearly unstable through feedback and parameter initialization. Learning is confined to a simple linear layer added to the output, allowing satisfactory performance on various tasks while bypassing the vanishing gradient problem.
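As a rough sketch of the idea, the following toy echo state network (one common form of reservoir computing; the sizes, scaling, and sine-prediction task are all illustrative assumptions) keeps its recurrent weights fixed near the edge of stability and fits only a linear readout:

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, random reservoir: scaled so the recurrent dynamics sit near the
# edge of stability (spectral radius just below 1.0).
n_in, n_res = 1, 200
W_in = rng.uniform(-0.5, 0.5, size=(n_res, n_in))
W = rng.normal(size=(n_res, n_res))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(inputs):
    """Collect reservoir states for a sequence of inputs of shape (T, n_in)."""
    states = np.zeros((len(inputs), n_res))
    x = np.zeros(n_res)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)   # the reservoir weights are never trained
        states[t] = x
    return states

# Toy task: predict the next value of a sine wave.
u = np.sin(np.linspace(0, 20 * np.pi, 2000)).reshape(-1, 1)
X, y = run_reservoir(u[:-1]), u[1:]

# Only this linear readout is learned, here via ridge regression.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(n_res), X.T @ y)
```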

However, reservoir-type RNNs face limitations: the dynamic reservoir must sit very near instability for long-term dependencies to persist, which can lead to output instability over time under continued stimuli, and there is no direct learning in the lower/earlier parts of the network. Sepp Hochreiter's analysis of the vanishing gradient problem led him and Jürgen Schmidhuber to invent Long Short-Term Memory (LSTM) recurrent neural networks in 1997.

LSTMs excel at learning long-term dependencies thanks to a persistent cell state that is passed along largely unchanged, modified only by a few element-wise operations at each time step. This lets the network hold on to short-term memories for an extended duration, which is where the name comes from.

Despite many suggested modifications, the classic LSTM variant remains a strong performer on demanding sequence tasks more than 20 years later. Nonetheless, several LSTM variants exist, each serving specific purposes.


Running deep learning models is no easy feat. With a customizable Exxact AI Training server, realize your full computational potential and reduce cloud usage for a lower TCO in the long run.


1. LSTM Classic

Long Short-Term Memory (LSTM), introduced by Sepp Hochreiter and Jürgen Schmidhuber in 1997, is a type of recurrent neural network (RNN) architecture designed to handle long-term dependencies. The key innovation of LSTM lies in its ability to selectively store, update, and retrieve information over extended sequences, making it particularly well-suited for tasks involving sequential data.

The structure of an LSTM network comprises memory cells, input gates, forget gates, and output gates. Memory cells serve as the long-term storage, input gates control the flow of new information into the memory cells, forget gates regulate the removal of irrelevant information, and output gates determine the output based on the current state of the memory cells. This intricate architecture enables LSTMs to effectively capture and remember patterns in sequential data while mitigating the vanishing and exploding gradient problems that often plague traditional RNNs.
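In practice the gate arithmetic is handled inside library layers. A minimal sketch in PyTorch, with placeholder sizes and a toy classification head, might look like this:

```python
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    """Toy sequence classifier: the gate logic lives inside nn.LSTM."""
    def __init__(self, input_size=32, hidden_size=64, num_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                      # x: (batch, seq_len, input_size)
        outputs, (h_n, c_n) = self.lstm(x)     # h_n: final hidden state, c_n: final cell state
        return self.head(h_n[-1])              # classify from the last hidden state

model = LSTMClassifier()
logits = model(torch.randn(8, 20, 32))         # batch of 8 sequences, 20 steps each
```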

The strengths of LSTMs lie in their ability to model long-range dependencies, making them especially useful in tasks such as natural language processing, speech recognition, and time series prediction. They excel in scenarios where the relationships between elements in a sequence are complex and extend over significant periods. LSTMs have proven effective in various applications, including machine translation, sentiment analysis, and handwriting recognition. Their robustness in handling sequential data with varying time lags has contributed to their widespread adoption in both academia and industry.

2. Bidirectional LSTM (BiLSTM)

Bidirectional Long Short-Term Memory (BiLSTM) is an extension of the traditional LSTM architecture that incorporates bidirectional processing to enhance its ability to capture contextual information from both past and future inputs. Introduced as an improvement over unidirectional LSTMs, BiLSTMs are particularly effective in tasks where understanding the context of a sequence in both directions is crucial, such as natural language processing and speech recognition.

The structure of a BiLSTM involves two separate LSTM layers—one processing the input sequence from the beginning to the end (forward LSTM), and the other processing it in reverse order (backward LSTM). The outputs from both directions are concatenated at each time step, providing a comprehensive representation that considers information from both preceding and succeeding elements in the sequence. This bidirectional approach enables BiLSTMs to capture richer contextual dependencies and make more informed predictions.
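A minimal PyTorch sketch (placeholder sizes again): setting bidirectional=True runs the forward and backward LSTMs over the same sequence and concatenates their outputs at each time step.

```python
import torch
import torch.nn as nn

# Bidirectional LSTM: forward and backward passes over the same sequence,
# with their outputs concatenated at every time step.
bilstm = nn.LSTM(input_size=32, hidden_size=64,
                 batch_first=True, bidirectional=True)

x = torch.randn(8, 20, 32)                # (batch, seq_len, features)
outputs, (h_n, c_n) = bilstm(x)

print(outputs.shape)   # (8, 20, 128): 64 forward + 64 backward features per step
print(h_n.shape)       # (2, 8, 64): final hidden state of each direction
```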

The strengths of BiLSTMs lie in their ability to capture long-range dependencies and contextual information more effectively than unidirectional LSTMs. By processing sequences in both directions, BiLSTMs excel in tasks such as named entity recognition, sentiment analysis, and machine translation, where understanding the context of a word or phrase requires considering both its past and future context. The bidirectional nature of BiLSTMs makes them versatile and well-suited for a wide range of sequential data analysis applications.

BiLSTMs are commonly used in natural language processing tasks, including part-of-speech tagging, named entity recognition, and sentiment analysis. They are also applied in speech recognition, where bidirectional processing helps in capturing relevant phonetic and contextual information. Additionally, BiLSTMs find use in time series prediction and biomedical data analysis, where considering information from both directions enhances the model's ability to discern meaningful patterns in the data.

3. Gated Recurrent Unit (GRU)

Diagrammatically, a Gated Recurrent Unit (GRU) can look more complicated than a classical LSTM. In fact, it is a bit simpler, and thanks to that relative simplicity it trains a little faster than the traditional LSTM. GRUs combine the gating functions of the LSTM's input gate and forget gate into a single update gate z.

In practice, that means the state positions slated for forgetting are exactly the positions where new information enters: the update gate z blends the previous hidden state and the new candidate state with weights that sum to one. Another key difference is that the GRU merges the cell state and hidden output h into a single hidden state, computing an intermediate candidate state internally at each step.
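In PyTorch, the GRU is essentially a drop-in replacement for the LSTM; a brief sketch with placeholder sizes:

```python
import torch
import torch.nn as nn

# GRU: same interface as nn.LSTM, but with a single hidden state
# (no separate cell state) and one fewer gate.
gru = nn.GRU(input_size=32, hidden_size=64, batch_first=True)

x = torch.randn(8, 20, 32)     # (batch, seq_len, features)
outputs, h_n = gru(x)          # note: no cell state is returned

print(outputs.shape)           # (8, 20, 64)
print(h_n.shape)               # (1, 8, 64)
```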

The strengths of GRUs lie in their ability to capture dependencies in sequential data efficiently, making them well-suited for tasks where computational resources are a constraint. GRUs have demonstrated success in various applications, including natural language processing, speech recognition, and time series analysis. They are especially useful in scenarios where real-time processing or low-latency applications are essential due to their faster training times and simplified structure.

GRUs are commonly used in natural language processing tasks such as language modeling, machine translation, and sentiment analysis. In speech recognition, GRUs excel at capturing temporal dependencies in audio signals. Moreover, they find applications in time series forecasting, where their efficiency in modeling sequential dependencies is valuable for predicting future data points. The simplicity and effectiveness of GRUs have contributed to their adoption in both research and practical implementations, offering an alternative to more complex recurrent architectures.

4. ConvLSTM (Convolution LSTM)

Convolutional Long Short-Term Memory (ConvLSTM) is a hybrid neural network architecture that combines the strengths of convolutional neural networks (CNNs) and Long Short-Term Memory (LSTM) networks. It is specifically designed to process spatiotemporal information in sequential data, such as video frames or time series data. ConvLSTM was introduced to capture both spatial patterns and temporal dependencies simultaneously, making it well-suited for tasks involving dynamic visual sequences.

The structure of ConvLSTM incorporates the concepts of both CNNs and LSTMs. Instead of using traditional fully connected layers, ConvLSTM employs convolutional operations within the LSTM cells. This allows the model to learn spatial hierarchies and abstract representations while maintaining the ability to capture long-term dependencies over time. ConvLSTM cells are particularly effective at capturing complex patterns in data where both spatial and temporal relationships are crucial.
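PyTorch does not ship a ConvLSTM layer, so the following is a simplified, illustrative cell (it omits the peephole terms of the original formulation): the gate pre-activations come from a convolution over the concatenated input frame and previous hidden state, and the cell state keeps its spatial layout.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Simplified ConvLSTM cell: one convolution produces all four gates."""
    def __init__(self, in_channels, hidden_channels, kernel_size=3):
        super().__init__()
        self.gates = nn.Conv2d(in_channels + hidden_channels,
                               4 * hidden_channels,
                               kernel_size, padding=kernel_size // 2)

    def forward(self, x, state):
        h, c = state
        # Convolve the stacked input frame and previous hidden state,
        # then split into input, forget, output, and candidate gates.
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        i, f, o = torch.sigmoid(i), torch.sigmoid(f), torch.sigmoid(o)
        c = f * c + i * torch.tanh(g)      # cell state retains its spatial layout
        h = o * torch.tanh(c)
        return h, c

# One step on a toy 16x16 "video frame" with 3 channels.
cell = ConvLSTMCell(in_channels=3, hidden_channels=8)
x = torch.randn(1, 3, 16, 16)
h = torch.zeros(1, 8, 16, 16)
c = torch.zeros(1, 8, 16, 16)
h, c = cell(x, (h, c))
```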

The strengths of ConvLSTM lie in its ability to model complex spatiotemporal dependencies in sequential data. This makes it a powerful tool for tasks such as video prediction, action recognition, and object tracking in videos. ConvLSTM is capable of automatically learning hierarchical representations of spatial and temporal features, enabling it to discern patterns and variations in dynamic sequences. It is especially advantageous in scenarios where understanding the evolution of patterns over time is essential.

ConvLSTM is commonly used in computer vision applications, particularly in video analysis and prediction tasks. For example, it finds applications in predicting future frames in a video sequence, where understanding the spatial-temporal evolution of the scene is crucial. ConvLSTM has also been employed in remote sensing for analyzing time series data, such as satellite imagery, to capture changes and patterns over different time intervals. The architecture's ability to simultaneously handle spatial and temporal dependencies makes it a versatile choice in various domains where dynamic sequences are encountered.

5. LSTMs With Attention Mechanism

Finally, we arrive at what is probably the most transformative innovation in sequence models in recent memory. Attention in machine learning refers to a model's ability to focus on specific elements in the data, in this case the hidden-state outputs of an LSTM. This dynamic focus enables the model to better capture context and improve performance on tasks involving sequential information.

The structure of LSTM with attention mechanisms involves incorporating attention mechanisms into the LSTM architecture. Attention mechanisms consist of attention weights that determine the importance of each input element at a given time step. These weights are dynamically adjusted during model training based on the relevance of each element to the current prediction. By attending to specific parts of the sequence, the model can effectively capture dependencies, especially in long sequences, without being overwhelmed by irrelevant information.
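One simple way to realize this (an illustrative sketch, not the only formulation): learn a scalar relevance score for each LSTM hidden state, normalize the scores with a softmax into attention weights, and pool the hidden states with those weights.

```python
import torch
import torch.nn as nn

class AttentiveLSTM(nn.Module):
    """LSTM encoder followed by a simple learned attention pooling."""
    def __init__(self, input_size=32, hidden_size=64, num_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(input_size, hidden_size, batch_first=True)
        self.score = nn.Linear(hidden_size, 1)     # one relevance score per time step
        self.head = nn.Linear(hidden_size, num_classes)

    def forward(self, x):                          # x: (batch, seq_len, input_size)
        outputs, _ = self.lstm(x)                  # (batch, seq_len, hidden)
        weights = torch.softmax(self.score(outputs), dim=1)   # attention weights
        context = (weights * outputs).sum(dim=1)   # weighted sum of hidden states
        return self.head(context), weights

model = AttentiveLSTM()
logits, attn = model(torch.randn(8, 20, 32))
```

Inspecting the returned weights also gives a rough, interpretable view of which time steps the model attended to for a given prediction.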

The strengths of LSTM with attention mechanisms lie in its ability to capture fine-grained dependencies in sequential data. The attention mechanism enables the model to selectively focus on the most relevant parts of the input sequence, improving its interpretability and performance. This architecture is particularly powerful in natural language processing tasks, such as machine translation and sentiment analysis, where the context of a word or phrase in a sentence is crucial for accurate predictions.

LSTM with attention mechanisms is commonly used in machine translation tasks, where it excels in aligning source and target language sequences effectively. In sentiment analysis, attention mechanisms help the model emphasize keywords or phrases that contribute to the sentiment expressed in a given text. The application of LSTM with attention extends to various other sequential data tasks where capturing context and dependencies is paramount.

The significant successes of LSTMs with attention in natural language processing foreshadowed the decline of LSTMs in the best language models. With increasingly powerful computational resources available for NLP research, state-of-the-art models now routinely make use of a memory-hungry architectural style known as the transformer.

Transformers do away with LSTMs in favor of feed-forward encoders/decoders with attention. They obviate the need for cell-state memory by attending to an entire sequence fragment at once, picking out its most important parts. BERT, GPT, and other major language models all follow this approach.

On the other hand, state-of-the-art NLP models incur a significant economic and environmental cost to train from scratch, requiring resources available mainly to research labs associated with wealthy tech companies. The massive energy requirements of these big transformer models make transfer learning all the more important, but they also leave plenty of room for LSTM-based sequence-to-sequence models to make meaningful contributions to tasks sufficiently different from those the big language transformers are trained for.

Understanding LSTM Is Crucial for Good Performance in Your Project

Standard LSTMs, with their memory cells and gating mechanisms, serve as the foundational architecture for capturing long-term dependencies. BiLSTMs enhance this capability by processing sequences bidirectionally, enabling a more comprehensive understanding of context. GRUs, with simplified structures and gating mechanisms, offer computational efficiency without sacrificing effectiveness. ConvLSTMs seamlessly integrate convolutional operations with LSTM cells, making them well-suited for spatiotemporal data. LSTMs with attention mechanisms dynamically focus on relevant parts of input sequences, improving interpretability and capturing fine-grained dependencies.

Choosing the most suitable LSTM architecture for a project depends on the specific characteristics of the data and the nature of the task. For projects requiring a deep understanding of long-range dependencies and sequential context, standard LSTMs or BiLSTMs might be preferable. In scenarios where computational efficiency is crucial, GRUs could offer a balance between effectiveness and speed. ConvLSTMs are apt choices for tasks involving spatiotemporal data, such as video analysis. If interpretability and precise attention to detail are essential, LSTMs with attention mechanisms provide a nuanced approach.

Ultimately, the choice of LSTM architecture should align with the project requirements, data characteristics, and computational constraints. Understanding the strengths and unique features of each LSTM variant enables practitioners to make informed decisions, ensuring that the selected architecture is well-suited for the intricacies of the specific sequential data analysis task at hand. As the field of deep learning continues to evolve, ongoing research and advancements may introduce new LSTM architectures, further expanding the toolkit available for tackling diverse challenges in sequential data processing.


Training complex models and utilizing LSTMs is, in the end, compute intensive. Exxact offers customizable workstations and servers to power your data center and computational needs. Contact us today!

