Weather Dataset 2: Weather Prediction with an RNN

II. Weather (Temperature) Prediction with a Recurrent Neural Network

This project builds PyTorch RNN & GRU models to predict future temperature.

  • Dataset: https://mp.weixin.qq.com/s/08BmF4RnnwQ-jX5s_ukDUA

  • Project code: https://github.com/disanda/b_code/tree/master/Weather_Prediction

  1. RNN
  • The model essentially learns the temporal relationships in the data.

  • The model's inputs and outputs have variable sequence lengths.

Below is an example of the inputs and outputs of a PyTorch RNN.

import torch

input_size = 10  # dimensionality of each input data point
output_size = 1  # dimensionality of each output data point
num_layers = 3   # number of stacked RNN layers

# note: nn.RNN's second argument is the hidden size; here it is set to output_size = 1,
# so each output time step is 1-dimensional
rnn_case = torch.nn.RNN(input_size, output_size, num_layers, batch_first=True)

batch_size = 4  # the model can process a batch of sequences at once
seq_length1 = 5 # sequence length 5, i.e. one sequence of 5 consecutive points
seq_length2 = 9 # sequence length 9, i.e. one sequence of 9 consecutive points

x1 = torch.randn(batch_size, seq_length1, input_size)
x2 = torch.randn(batch_size, seq_length2, input_size)

h1_0 = torch.zeros(num_layers, batch_size, output_size)
h2_0 = torch.zeros(num_layers, batch_size, output_size)

y1, h1_1 = rnn_case(x1, h1_0)
# y1.shape = (batch_size, seq_length1, output_size)
# h1_1.shape = (num_layers, batch_size, output_size)

y2, h2_1 = rnn_case(x2, h2_0)
# y2.shape = (batch_size, seq_length2, output_size)
# h2_1.shape = (num_layers, batch_size, output_size)

# to predict the n-th point from the first n-1 points, take the last time step
y1_out = y1[:, -1, :]
y2_out = y2[:, -1, :]
  2. Data preprocessing

2.1 Print the data's features

In pandas, the features are the columns of the DataFrame.

import pandas as pd
import matplotlib.pyplot as plt

csv_path = "mpi_saale_2021b.csv"
data_frame = pd.read_csv(csv_path)
print(data_frame.columns)

# Index(['Date Time', 'p (mbar)', 'T (degC)', 'rh (%)', 'sh (g/kg)', 'Tpot (K)',
#        'Tdew (degC)', 'VPmax (mbar)', 'VPact (mbar)', 'VPdef (mbar)',
#        'H2OC (mmol/mol)', 'rho (g/m**3)', 'wv (m/s)', 'wd (deg)', 'rain (mm)',
#        'SWDR (W/m**2)', 'SDUR (s)', 'TRAD (degC)', 'Rn (W/m**2)',
#        'ST002 (degC)', 'ST004 (degC)', 'ST008 (degC)', 'ST016 (degC)',
#        'ST032 (degC)', 'ST064 (degC)', 'ST128 (degC)', 'SM008 (%)',
#        'SM016 (%)', 'SM032 (%)', 'SM064 (%)', 'SM128 (%)'], dtype='object')
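Since matplotlib is already imported, a quick plot of the target column is an easy sanity check before any modelling. This is a minimal sketch; 'T (degC)' is the column used as the prediction target later.

plt.figure(figsize=(10, 4))
plt.plot(data_frame['T (degC)'].values)   # temperature column from the raw DataFrame
plt.xlabel('time step')
plt.ylabel('T (degC)')
plt.title('Temperature over the recording period')
plt.show()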

2.2 Drop some features

data = data_frame.drop(columns=['Date Time']) # drop the string-valued feature

# drop the other degC features, keeping only 'T (degC)'
deg_columns = data_frame.filter(like='degC').columns
filtered_list = [item for item in deg_columns if item != 'T (degC)']
data = data.drop(columns=pd.Index(filtered_list))
#print(data.columns)

2.3 Data standardization

Think of the features as different currencies; standardization converts them all to a common unit of measurement.


# standardize the features
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
data_scaled = scaler.fit_transform(data)
#print(data_scaled)
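The target the model learns is the standardized temperature, so its predictions also come out in standardized units. Below is a minimal sketch of how to map such a value back to degrees Celsius, assuming 'T (degC)' is column index 1 of data (the same index used when building the labels below):

t_mean = scaler.mean_[1]   # mean of 'T (degC)' in the original units
t_std = scaler.scale_[1]   # standard deviation of 'T (degC)'

def to_degc(pred_scaled):
    # StandardScaler applies z = (x - mean) / std per column; this inverts it
    return pred_scaled * t_std + t_mean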

2.4 Sequencing and PyTorch batching

  • y = f(x): x is the input, y is the output, f is the model

  • Sequence x: each data unit becomes one sequence (sequence_length - 1 time steps, each with input_size features)

  • Build the label y, i.e. the value to predict (the "temperature" feature of the next time step)


import numpy as np
import torch
from torch.utils.data import TensorDataset, DataLoader

# build the sequence data (sequence_length and batch_size are set in the hyperparameter section below)
X, y = [], []

for i in range(len(data_scaled) - sequence_length):
    X.append(data_scaled[i:i+sequence_length-1])   # features of the first sequence_length-1 time steps
    y.append(data_scaled[i+sequence_length-1, 1])  # the next time step's 2nd feature, 'T (degC)', as the target

X = np.array(X)
y = np.array(y)

# convert to PyTorch tensors
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')  # falls back to CPU
X = torch.tensor(X, dtype=torch.float32).to(device)
y = torch.tensor(y, dtype=torch.float32).to(device)

# split into training and test sets (chronological split, no shuffling)
dataset = TensorDataset(X, y)
train_size = int(0.8 * len(dataset))
#test_size = len(dataset) - train_size
#train_dataset, test_dataset = random_split(dataset, [train_size, test_size])

train_dataset = TensorDataset(*dataset[:train_size])
test_dataset = TensorDataset(*dataset[train_size:])

train_loader = DataLoader(train_dataset, batch_size=batch_size, shuffle=False, drop_last=True)
test_loader = DataLoader(test_dataset, batch_size=batch_size, shuffle=False, drop_last=True)
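A quick sanity check on what the loader yields (a sketch; the shapes follow from the hyperparameters chosen in the next section):

inputs, targets = next(iter(train_loader))
print(inputs.shape)   # (batch_size, sequence_length-1, input_size)
print(targets.shape)  # (batch_size,)
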
  3. Hyperparameter selection (collected into a runnable snippet after this list)
  • input_size = len(data.columns)

  • hidden_size = 64

  • output_size = 1

  • num_layers = 2

  • num_epochs = 30

  • learning_rate = 5e-5 # alternative: 0.001

  • batch_size = 8

  • sequence_length = 8 # use the first 7 time steps to predict the 8th

  • model_type = 'RNN' # or 'GRU'
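For convenience, the settings above can be collected into one runnable block (values exactly as listed; data is the DataFrame from section 2.2):

input_size = len(data.columns)  # number of remaining feature columns
hidden_size = 64
output_size = 1
num_layers = 2
num_epochs = 30
learning_rate = 5e-5            # alternative: 0.001
batch_size = 8
sequence_length = 8             # use the first 7 time steps to predict the 8th
model_type = 'RNN'              # or 'GRU'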

  4. Model

4.1 Model design

The model is kept in a separate file to decouple the code (it is imported as `models` in the training script).


import torch.nn as nn

class WeatherRNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size, num_layers=1, model_type='RNN'):
        super(WeatherRNN, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers
        if model_type == 'RNN':
            self.rnn = nn.RNN(input_size, hidden_size, num_layers, batch_first=True) # optionally nonlinearity='relu'
        elif model_type == 'GRU':
            self.rnn = nn.GRU(input_size, hidden_size, num_layers, batch_first=True)
        self.fc = nn.Linear(hidden_size, output_size)
        #self.dropout = nn.Dropout(p=0.2) # made results worse here

    def forward(self, x, h0):
        out, hn = self.rnn(x, h0)
        #out = out[:, -1, :]
        #out = self.dropout(out)
        out = self.fc(out[:, -1, :])  # only the last time step feeds the output layer
        #out = torch.tanh(out)
        return out, hn
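A quick standalone shape check of the model (a sketch using the hyperparameters from section 3 and random inputs):

import torch

model = WeatherRNN(input_size, hidden_size, output_size, num_layers, model_type=model_type)
x = torch.randn(batch_size, sequence_length - 1, input_size)  # one batch of input sequences
h0 = torch.zeros(num_layers, batch_size, hidden_size)         # initial hidden state
out, hn = model(x, h0)
print(out.shape)  # (batch_size, output_size)
print(hn.shape)   # (num_layers, batch_size, hidden_size)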

4.2 Model training

  • Initialization: 1. the model, 2. the loss function, 3. the optimizer

  • Training (forward pass): compute the output, y = f(x)

  • Training (backward pass): backpropagate the loss and update the weights


import torch.optim as optim
import models  # the module containing WeatherRNN (section 4.1)

# initialize the model, loss function, and optimizer
model = models.WeatherRNN(input_size, hidden_size, output_size, num_layers, model_type=model_type).to(device)
criterion = nn.MSELoss()
optimizer = optim.Adam(model.parameters(), lr=learning_rate)

# train the model
model.train()
h0 = torch.zeros(num_layers, batch_size, hidden_size).to(device)
for epoch in range(num_epochs):
    for inputs, targets in train_loader:
        # print(inputs.shape)
        # print(targets.shape)

        # each step feeds sequence_length - 1 points and predicts the sequence_length-th
        output, hn = model(inputs, h0)   # inputs: [batch_size, sequence_length-1, features]
        h0 = hn.detach()                 # [num_layers, batch_size, hidden_size]; detach to truncate backprop through time
        #print(output.shape)
        #print(hn.shape)
        predictions = output[:, -1]

        loss = criterion(predictions, targets)

        optimizer.zero_grad()
        loss.backward() # retain_graph=True
        max_norm = 2.0  # maximum gradient norm
        torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm) # clip gradients to this norm
        optimizer.step()

    print(f'Epoch [{epoch+1}/{num_epochs}], Loss: {loss.item():.7f}')

print("Training complete.")

4.3 Model evaluation


# evaluate the model
model.eval()
test_loss = 0.0
h0 = torch.zeros(num_layers, batch_size, hidden_size).to(device)
with torch.no_grad():
    for inputs, targets in test_loader:
        #print(inputs.shape)
        #print(targets.shape)

        # each step feeds sequence_length - 1 points and predicts the sequence_length-th
        output, hn = model(inputs, h0)
        h0 = hn.detach()
        predictions = output[:, -1]

        loss = criterion(predictions, targets)
        test_loss += loss.item()

test_loss /= len(test_loader)
print(f'Test Loss: {test_loss:.7f}')
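The test loss above is measured on standardized values. To inspect predictions in physical units, here is a sketch that collects the test-set predictions, converts them back to degC (using the to_degc helper from the sketch in section 2.3), and plots them against the measurements:

import matplotlib.pyplot as plt

preds, truth = [], []
h0 = torch.zeros(num_layers, batch_size, hidden_size).to(device)
with torch.no_grad():
    for inputs, targets in test_loader:
        output, hn = model(inputs, h0)
        h0 = hn.detach()
        preds.append(output[:, -1].cpu())
        truth.append(targets.cpu())

preds = to_degc(torch.cat(preds).numpy())  # back to degrees Celsius
truth = to_degc(torch.cat(truth).numpy())

plt.figure(figsize=(10, 4))
plt.plot(truth[:500], label='measured T (degC)')
plt.plot(preds[:500], label='predicted T (degC)')
plt.legend()
plt.show()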

  5. Summary & reference links

The same approach can later be extended to stock-style time-series data.

5.1 Tuning

Tune by watching train_loss and test_loss:
  • hidden_size, num_layers: should scale with the amount of data; for this example keep them relatively small

  • learning_rate: lower it if the loss changes too fast or oscillates

  • num_epochs: increase it as long as the loss keeps decreasing meaningfully

5.2 Tricks

  • Gradient clipping
    max_norm = 2.0 # maximum gradient norm
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm) # clip gradients to this norm

  • Dropout

    Suited to RNNs with far more parameters; it did not help in this example.

5.3 Reference links:

  • Code: https://blog.paperspace.com/weather-forecast-using-ltsm-networks/
  • Weather dataset: https://www.bgc-jena.mpg.de/wetter/
