Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks (LSTNet): Paper Walkthrough

This post introduces LSTNet, a deep learning framework for multivariate time series forecasting published at SIGIR 2018. LSTNet combines CNN and GRU components to capture short- and long-term patterns, uses a recurrent-skip component to handle periodicity, a temporal attention component to weight time steps dynamically, and an autoregressive component to address scale insensitivity. Experiments show LSTNet performs well on periodic data, but its predictions on non-periodic data are only moderate.


Since my own work is in time series forecasting, the papers I read are mostly in this area, in particular applications of deep learning to forecasting. Setting deep learning aside, time series forecasting has a rich toolbox of its own: ARIMA, VAR, triple exponential smoothing, SARIMA, and so on, as well as machine learning methods (regression analysis, random forests, GBDT, XGBoost, etc.). With deep learning attracting so much attention from researchers, its application to time series forecasting has drawn growing interest, and more and more papers take a deep-learning approach to the problem.
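To make the classical baseline concrete, here is a minimal sketch (on hypothetical synthetic data, not from the paper) of fitting an autoregressive AR(2) model by ordinary least squares, the kind of traditional method LSTNet is later compared against:

```python
import numpy as np

# Hypothetical toy series generated by a known AR(2) process:
#   y_t = 0.6 * y_{t-1} - 0.2 * y_{t-2} + noise
rng = np.random.default_rng(42)
T = 500
y = np.zeros(T)
for t in range(2, T):
    y[t] = 0.6 * y[t - 1] - 0.2 * y[t - 2] + rng.normal(scale=0.1)

# Stack the lagged values into a design matrix, then fit by least squares.
Y = y[2:]
Z = np.column_stack([y[1:-1], y[:-2]])      # columns: lag-1, lag-2
coef, *_ = np.linalg.lstsq(Z, Y, rcond=None)
print(np.round(coef, 2))                    # should be close to [0.6, -0.2]

# One-step-ahead forecast from the two most recent observations.
one_step_forecast = coef @ y[-1:-3:-1]
```

With enough data the estimated coefficients recover the true dynamics well; the limitation the paper targets is that such a purely linear model cannot mix nonlinear short-term patterns with long-term periodic structure.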

I had read quite a few papers without ever writing up a unified summary, so I am using this blog to revisit them in detail; whether I am writing my own papers or running experiments later, I hope to draw fresh inspiration from these readings. The paper summarized today is Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks (LSTNet), published at SIGIR 2018: https://arxiv.org/abs/1703.07015.

Abstract

The paper opens with the abstract, which deserves a careful read. It first stresses the importance of multivariate time series forecasting across many domains, motivating the research. It then observes that time series collected in real applications usually involve a mixture of long- and short-term patterns (see the figure below), on which traditional approaches such as autoregressive models and Gaussian processes may fail. The authors therefore propose a novel deep learning framework, LSTNet.
