{ "version": "https://jsonfeed.org/version/1", "title": "Torch", "description": "Torch is a scientific computing framework with wide support for machine learning algorithms that puts GPUs first. It is easy to use and efficient, thanks to an easy and fast scripting language, LuaJIT, and an underlying C/CUDA implementation.", "home_page_url": "go/torch", "feed_url": "feed/torch.json", "icon": "https://cdn.v2ex.com/navatar/2bca/b9d9/995_large.png?m=1582782020", "favicon": "https://cdn.v2ex.com/navatar/2bca/b9d9/995_normal.png?m=1582782020", "items": [ { "author": { "url": "member/RYS", "name": "RYS", "avatar": "https://cdn.v2ex.com/gravatar/12e74a3c35952923ae61f3614558db1b?s=73&d=retro" }, "url": "t/1067756", "title": "Torch \u51c9\u4e86\u5417\uff1f\u8fd9\u4e2a\u4e3b\u9898\u600e\u4e48\u8fd9\u4e48\u4e45\u6ca1\u6709\u65b0\u8282\u70b9\u4e86", "id": "t/1067756", "date_published": "2024-08-26T02:20:50+00:00", "content_html": "" }, { "author": { "url": "member/xing393939", "name": "xing393939", "avatar": "https://cdn.v2ex.com/avatar/9096/5ec7/18925_large.png?m=1652855853" }, "url": "t/987091", "date_modified": "2023-10-31T06:21:06+00:00", "content_html": "

The training code is as follows:

import numpy as np
import torch

# 1. prepare dataset
xy = np.loadtxt("redPacket_2.csv", skiprows=1, delimiter=",", dtype=np.float32)
x_data = torch.from_numpy(xy[:, :-1])
y_data = torch.from_numpy(xy[:, [-1]])

# 2. design model using class
class Model(torch.nn.Module):
    def __init__(self):
        super(Model, self).__init__()
        self.linear1 = torch.nn.Linear(4, 2)
        self.linear2 = torch.nn.Linear(2, 1)
        self.activate = torch.nn.ReLU()
        self.sigmoid = torch.nn.Sigmoid()

    def forward(self, x):
        x = self.activate(self.linear1(x))
        x = self.sigmoid(self.linear2(x))
        return x

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = Model().to(device)
x_data = x_data.to(device)
y_data = y_data.to(device)

# 3. construct loss and optimizer
criterion = torch.nn.BCELoss(reduction="mean")
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

# 4. training cycle: forward, backward, update
for epoch in range(10000):
    y_pred = model(x_data)
    loss = criterion(y_pred, y_data)
    if epoch % 100 == 0:
        print(
            "epoch %9d loss %.3f" % (epoch, loss.item()),
            model.linear2.weight.data,
            model.linear2.bias.data,
        )

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

The training set can be downloaded here. I print the model's loss and the model parameters every 100 epochs; the results are as follows:

epoch 0 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 100 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 200 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 300 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 400 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 500 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 600 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 700 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 800 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 900 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 1000 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 1100 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 1200 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')
epoch 1300 loss 50.000 tensor([[0.0944, 0.1484]], device='cuda:0') tensor([0.4391], device='cuda:0')

I don't know why it fails to converge. Is there something I should change?
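A flat loss with a sigmoid + BCELoss setup is often an input problem rather than a model problem, so a pre-training sanity check is a cheap first step. This is an editor's sketch, not part of the original post: `check_bce_inputs` is a hypothetical helper, and the toy rows merely stand in for redPacket_2.csv, whose contents are not shown here.

```python
import numpy as np

# Editor's sketch: a pre-training sanity check for a sigmoid + BCELoss setup.
# BCELoss expects targets in [0, 1], and poorly scaled features can saturate
# the sigmoid so that gradients vanish and the parameters never move.
def check_bce_inputs(x, y):
    issues = []
    if y.min() < 0 or y.max() > 1:
        issues.append("targets outside [0, 1]; BCELoss is undefined for them")
    if np.abs(x).max() > 10:
        issues.append("large feature magnitudes; consider normalizing columns")
    return issues

# Toy rows standing in for redPacket_2.csv (the real file is not shown here).
x = np.array([[0.5, -1.0, 2.0, 0.1]], dtype=np.float32)
good_y = np.array([[1.0]], dtype=np.float32)
bad_y = np.array([[8.88]], dtype=np.float32)   # e.g. a raw amount column
print(check_bce_inputs(x, good_y))  # []
print(check_bce_inputs(x, bad_y))   # one warning about the targets
```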

\n", "date_published": "2023-10-31T06:20:25+00:00", "title": "\u5199\u4e86\u4e00\u4e2a\u7b80\u5355\u7684\u4e8c\u5206\u7c7b\u6a21\u578b\uff0c\u4f46\u662f\u8bad\u7ec3\u4e86 N \u6b21\u6a21\u578b\u53c2\u6570\u90fd\u6ca1\u6709\u52a8\u9759", "id": "t/987091" }, { "author": { "url": "member/1722332572", "name": "1722332572", "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858" }, "url": "t/561876", "date_modified": "2020-02-27T05:40:30+00:00", "content_html": "

PyTorch tutorials and handbooks

\n\n

PyTorch video tutorials

\n\n

NLP & PyTorch in practice

\n\n

CV & PyTorch in practice

\n\n

Recommended PyTorch papers

\n\n

Recommended PyTorch books

\n

While TensorFlow books are already a dime a dozen, far fewer PyTorch books have been published so far; here I recommend four PyTorch books that I consider worthwhile.

\n\n

Stars and forks welcome: https://github.com/INTERMT/Awesome-PyTorch-Chinese

\n", "date_published": "2019-05-07T08:59:52+00:00", "title": "[\u5e72\u8d27] \u53f2\u4e0a\u6700\u5168\u7684 PyTorch \u5b66\u4e60\u8d44\u6e90\u6c47\u603b import torch as tf", "id": "t/561876" }, { "author": { "url": "member/1722332572", "name": "1722332572", "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858" }, "url": "t/521234", "title": "PyTorch 60 \u5206\u949f\u5b89\u88c5\u5165\u95e8\u6559\u7a0b", "id": "t/521234", "date_published": "2018-12-26T08:54:19+00:00", "content_html": "PyTorch 60 \u5206\u949f\u5165\u95e8\u6559\u7a0b\uff1aPyTorch \u6df1\u5ea6\u5b66\u4e60\u5b98\u65b9\u5165\u95e8\u4e2d\u6587\u6559\u7a0b
http://pytorchchina.com/2018/06/25/what-is-pytorch/
PyTorch 60-minute tutorial: autograd (automatic differentiation)
http://pytorchchina.com/2018/12/25/autograd-automatic-differentiation/
PyTorch 60-minute tutorial: neural networks
http://pytorchchina.com/2018/12/25/neural-networks/
PyTorch 60-minute tutorial: training a classifier
http://pytorchchina.com/2018/12/25/training-a-classifier/
PyTorch 60-minute tutorial: data parallelism
http://pytorchchina.com/2018/12/11/optional-data-parallelism/
PyTorch 60-minute installation tutorial
http://pytorchchina.com" }, { "author": { "url": "member/1722332572", "name": "1722332572", "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858" }, "url": "t/516574", "title": "PyTorch \u5b89\u88c5\u6559\u7a0b", "id": "t/516574", "date_published": "2018-12-11T08:55:40+00:00", "content_html": "PyTorch windows \u5b89\u88c5\u6559\u7a0b\uff1a\u4e24\u884c\u4ee3\u7801\u641e\u5b9a PyTorch \u5b89\u88c5
http://pytorchchina.com/2018/12/11/pytorch-windows-install-1/
PyTorch Mac install tutorial
http://pytorchchina.com/2018/12/11/pytorch-mac-install/
PyTorch Linux install tutorial
http://pytorchchina.com/2018/12/11/pytorch-linux-install/" }, { "author": { "url": "member/1722332572", "name": "1722332572", "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858" }, "url": "t/516422", "title": "PyTorch 60-Minute Tutorial", "id": "t/516422", "date_published": "2018-12-11T03:26:21+00:00", "content_html": "What is PyTorch?
PyTorch is a Python-based scientific computing package targeted at two kinds of users:

A replacement for NumPy that can use the power of GPUs.
A deep learning research platform that provides flexibility and speed.
Getting started
Tensors
Tensors are similar to NumPy's ndarrays, and Tensors can additionally be used on a GPU to accelerate computing.

from __future__ import print_function
import torch
Construct an uninitialized 5×3 matrix:

x = torch.empty(5, 3)
print(x)
Output:

tensor(1.00000e-04 *
[[-0.0000, 0.0000, 1.5135],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000],
[ 0.0000, 0.0000, 0.0000]])


Construct a randomly initialized matrix:

x = torch.rand(5, 3)
print(x)
Output:

tensor([[ 0.6291, 0.2581, 0.6414],
[ 0.9739, 0.8243, 0.2276],
[ 0.4184, 0.1815, 0.5131],
[ 0.5533, 0.5440, 0.0718],
[ 0.2908, 0.1850, 0.5297]])


Construct a matrix filled with zeros, of dtype long:

x = torch.zeros(5, 3, dtype=torch.long)
print(x)
Output:

tensor([[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0]])
Construct a tensor directly from data:

x = torch.tensor([5.5, 3])
print(x)
Output:

tensor([ 5.5000, 3.0000])
Create a tensor based on an existing tensor:

x = x.new_ones(5, 3, dtype=torch.double)
# new_* methods take in sizes
print(x)

x = torch.randn_like(x, dtype=torch.float)
# override dtype!
print(x)
# result has the same size
Output:

tensor([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]], dtype=torch.float64)
tensor([[-0.2183, 0.4477, -0.4053],
[ 1.7353, -0.0048, 1.2177],
[-1.1111, 1.0878, 0.9722],
[-0.7771, -0.2174, 0.0412],
[-2.1750, 1.3609, -0.3322]])
Get its size:

print(x.size())
Output:

torch.Size([5, 3])
Note

torch.Size is in fact a tuple, so it supports all tuple operations.
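Since torch.Size behaves like a tuple, it can be unpacked, indexed, and compared directly; a small sketch:

```python
import torch

x = torch.zeros(5, 3)
size = x.size()          # torch.Size([5, 3])

rows, cols = size        # tuple unpacking works
print(rows * cols)       # 15
print(size[0], size[-1]) # indexing and negative indexing work too
print(size == (5, 3))    # compares equal to a plain tuple: True
```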

Operations
In the following examples, we will look at the addition operation.

Addition: syntax 1

y = torch.rand(5, 3)
print(x + y)
Out:

tensor([[-0.1859, 1.3970, 0.5236],
[ 2.3854, 0.0707, 2.1970],
[-0.3587, 1.2359, 1.8951],
[-0.1189, -0.1376, 0.4647],
[-1.8968, 2.0164, 0.1092]])
Addition: syntax 2

print(torch.add(x, y))
Out:

tensor([[-0.1859, 1.3970, 0.5236],
[ 2.3854, 0.0707, 2.1970],
[-0.3587, 1.2359, 1.8951],
[-0.1189, -0.1376, 0.4647],
[-1.8968, 2.0164, 0.1092]])
Addition: providing an output tensor as an argument

result = torch.empty(5, 3)
torch.add(x, y, out=result)
print(result)
Out:

tensor([[-0.1859, 1.3970, 0.5236],
[ 2.3854, 0.0707, 2.1970],
[-0.3587, 1.2359, 1.8951],
[-0.1189, -0.1376, 0.4647],
[-1.8968, 2.0164, 0.1092]])
Addition: in-place

# adds x to y
y.add_(x)
print(y)
Out:

tensor([[-0.1859, 1.3970, 0.5236],
[ 2.3854, 0.0707, 2.1970],
[-0.3587, 1.2359, 1.8951],
[-0.1189, -0.1376, 0.4647],
[-1.8968, 2.0164, 0.1092]])
Note

Any operation that mutates a tensor in place is post-fixed with an underscore '_'. For example, x.copy_(y) and x.t_() will change x.
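To make the underscore convention concrete, here is a small sketch contrasting out-of-place add with in-place add_:

```python
import torch

x = torch.ones(2, 2)
y = x.add(1)       # out-of-place: returns a new tensor, x unchanged
print(x[0, 0].item(), y[0, 0].item())  # 1.0 2.0

x.add_(1)          # in-place: the trailing underscore mutates x itself
print(x[0, 0].item())                  # 2.0
```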

You can use standard NumPy-like indexing:

print(x[:, 1])
Out:

tensor([ 0.4477, -0.0048, 1.0878, -0.2174, 1.3609])
Resizing: if you want to resize or reshape a tensor, you can use torch.view:

x = torch.randn(4, 4)
y = x.view(16)
z = x.view(-1, 8) # the size -1 is inferred from other dimensions
print(x.size(), y.size(), z.size())
Out:

torch.Size([4, 4]) torch.Size([16]) torch.Size([2, 8])
If you have a one-element tensor, use .item() to get its value as a Python number.

x = torch.randn(1)
print(x)
print(x.item())
Out:

tensor([ 0.9422])
0.9422121644020081
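.item() only works on one-element tensors; a reduction such as sum() yields one, which is the usual way to pull a scalar (for example a loss value) out of a tensor. A small sketch:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0])
total = t.sum()          # a zero-dim (one-element) tensor
print(total.item())      # 6.0 as a plain Python float

# .item() raises an error on tensors with more than one element:
try:
    t.item()
except (RuntimeError, ValueError):
    print("item() needs exactly one element")
```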

PyTorch tutorial: http://pytorchchina.com/2018/06/25/what-is-pytorch/" }, { "author": { "url": "member/1722332572", "name": "1722332572", "avatar": "https://cdn.v2ex.com/avatar/31e3/2c4d/135473_large.png?m=1739072858" }, "url": "t/516416", "title": "PyTorch 60-Minute Tutorial: Data Parallelism", "id": "t/516416", "date_published": "2018-12-11T03:15:52+00:00", "content_html": "Optional: data parallelism (full code download at the end of the post)
Authors: Sung Kim and Jenny Kang

In this tutorial, we will learn how to use multiple GPUs with DataParallel.
Using multiple GPUs with PyTorch is very easy. You can put a model on a GPU:

device = torch.device(\"cuda:0\")
model.to(device)
Then you can copy all your tensors to the GPU:

mytensor = my_tensor.to(device)
Note that just calling my_tensor.to(device) returns a new copy of my_tensor on the GPU rather than rewriting my_tensor in place. You need to assign the result to a new variable and use that tensor on the GPU.
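The copy semantics can be seen without a GPU by using .to() for a dtype conversion, which always produces a new tensor; this sketch uses a CPU device so it stays runnable anywhere (the tutorial itself uses cuda:0):

```python
import torch

cpu = torch.device("cpu")
t = torch.ones(3)

# .to() returns a tensor with the target device/dtype; assign the result.
t64 = t.to(torch.float64)   # dtype conversion produces a new tensor
t64[0] = 99.0
print(t[0].item())          # 1.0 -- the original is untouched

t = t.to(cpu)               # the recommended pattern: rebind the name
print(t.device.type)        # cpu
```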

Executing forward and backward propagation on multiple GPUs is natural. However, PyTorch uses only one GPU by default. You can easily run your operations on multiple GPUs by making your model run in parallel with DataParallel:

model = nn.DataParallel(model)
This is the heart of the whole tutorial; we will go through it in detail below.
Imports and parameters

Import PyTorch modules and define the parameters.

import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
# Parameters

input_size = 5
output_size = 2

batch_size = 30
data_size = 100
Device

device = torch.device(\"cuda:0\" if torch.cuda.is_available() else \"cpu\")
Dummy (toy) data

Generate some toy data. You only need to implement __getitem__.

class RandomDataset(Dataset):

    def __init__(self, size, length):
        self.len = length
        self.data = torch.randn(length, size)

    def __getitem__(self, index):
        return self.data[index]

    def __len__(self):
        return self.len

rand_loader = DataLoader(dataset=RandomDataset(input_size, data_size),
                         batch_size=batch_size, shuffle=True)
Simple model

For the demo, our model just takes an input, performs a linear operation, and gives an output. However, you can use DataParallel on any model (CNN, RNN, Capsule Net, etc.).

We placed a print statement inside the model to monitor the size of the input and output tensors. Please pay attention to what is printed at batch rank 0.

class Model(nn.Module):
    # Our model

    def __init__(self, input_size, output_size):
        super(Model, self).__init__()
        self.fc = nn.Linear(input_size, output_size)

    def forward(self, input):
        output = self.fc(input)
        print("\tIn Model: input size", input.size(),
              "output size", output.size())

        return output


Create the model and use data parallelism

This is the core of the tutorial. First, we make a model instance and check whether we have multiple GPUs. If we do, we wrap our model with nn.DataParallel. Then we put the model on the GPUs with model.to(device).



model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)

model.to(device)
Output:

Let's use 2 GPUs!
Run the model:
Now we can see the sizes of the input and output tensors.

for data in rand_loader:
    input = data.to(device)
    output = model(input)
    print("Outside: input size", input.size(),
          "output_size", output.size())
Output:

In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
Results:

If you have no GPU or only one GPU, then when we batch 30 inputs and 30 outputs, the model gets 30 inputs and produces 30 outputs, as expected. But if you have multiple GPUs, you will see results like the following.

Multiple GPUs

If you have 2 GPUs, you will see:

# on 2 GPUs
Let's use 2 GPUs!
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
In Model: input size torch.Size([15, 5]) output size torch.Size([15, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
In Model: input size torch.Size([5, 5]) output size torch.Size([5, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])


If you have 3 GPUs, you will see:

Let's use 3 GPUs!
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
In Model: input size torch.Size([10, 5]) output size torch.Size([10, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
If you have 8 GPUs, you will see:

Let's use 8 GPUs!
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([4, 5]) output size torch.Size([4, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([30, 5]) output_size torch.Size([30, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
In Model: input size torch.Size([2, 5]) output size torch.Size([2, 2])
Outside: input size torch.Size([10, 5]) output_size torch.Size([10, 2])
Summary
DataParallel splits your data automatically and dispatches jobs to models on multiple GPUs. After each model finishes its job, DataParallel collects and merges the results before returning them to you.
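The summary boils down to a single conditional wrap: use nn.DataParallel only when more than one GPU is visible, otherwise the plain module is used as-is. A minimal sketch that stays runnable on a CPU-only machine:

```python
import torch
import torch.nn as nn

# Wrap the model only when more than one GPU is visible; the call site
# does not change either way.
model = nn.Linear(5, 2)
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)   # scatter inputs, gather outputs

out = model(torch.randn(4, 5))       # works unchanged in both cases
print(out.shape)                     # torch.Size([4, 2])
```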

For more information, please visit:
https://pytorch.org/tutorials/beginner/former_torchies/parallelism_tutorial.html

Download the full Python code: http://pytorchchina.com/2018/12/11/optional-data-parallelism/" } ] }