# ResNeSt
ResNeSt (Split-Attention Network) is a new ResNet variant. It significantly boosts the performance of downstream models such as Mask R-CNN, Cascade R-CNN, and DeepLabV3.
| Model | crop size | PyTorch top-1 acc (%) | Gluon top-1 acc (%) |
|-------------|-----------|---------|-------|
| ResNeSt-50 | 224 | 81.03 | 81.04 |
| ResNeSt-101 | 256 | 82.83 | 82.81 |
| ResNeSt-200 | 320 | 83.84 | 83.88 |
| ResNeSt-269 | 416 | 84.54 | 84.53 |
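As a quick way to try one of the PyTorch checkpoints above, here is a minimal loading sketch. The `torch.hub` repository name `zhanghang1989/ResNeSt` and the `resnest50` entry point are assumptions not stated in this section; check the project's hub configuration for the exact names.

```python
import torch

# Minimal sketch: load a pretrained ResNeSt-50 classifier via torch.hub.
# NOTE: the hub repo 'zhanghang1989/ResNeSt' and entry point 'resnest50'
# are assumed here, not confirmed by this section.
model = torch.hub.load('zhanghang1989/ResNeSt', 'resnest50', pretrained=True)
model.eval()

# Classify a dummy image at the 224x224 crop size listed for ResNeSt-50.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.argmax(dim=1))  # predicted ImageNet class index
```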
## Semantic Segmentation
- PyTorch models and training: Please visit [PyTorch Encoding Toolkit](https://hangzhang.org/PyTorch-Encoding/model_zoo/segmentation.html) (see the loading sketch after this list).
- Training with Gluon: Please visit [GluonCV Toolkit](https://gluon-cv.mxnet.io/model_zoo/segmentation.html#ade20k-dataset).
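As a minimal sketch of loading one of the PyTorch segmentation models, assuming the PyTorch-Encoding toolkit linked above is installed; the model name `DeepLab_ResNeSt50_ADE` is an assumption, and the exact names are listed on the model zoo page.

```python
import torch
import encoding  # PyTorch-Encoding toolkit (assumed installed; see link above)

# Minimal sketch: fetch a pretrained DeepLabV3 + ResNeSt-50 ADE20K model from
# the PyTorch-Encoding model zoo. The name 'DeepLab_ResNeSt50_ADE' is an
# assumption; check the model zoo page linked above for the exact identifiers.
model = encoding.models.get_model('DeepLab_ResNeSt50_ADE', pretrained=True)
model.eval()

# Run inference on a dummy image; the model returns per-pixel class scores
# (possibly as a tuple with auxiliary outputs, so take the first element).
x = torch.randn(1, 3, 480, 480)
with torch.no_grad():
    output = model(x)[0]
pred = output.argmax(dim=1)  # (1, H, W) predicted ADE20K labels
print(pred.shape)
```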
### Results on ADE20K
<table class="tg">
<tr>
<th class="tg-cly1">Method</th>
<th class="tg-cly1">Backbone</th>
<th class="tg-cly1">pixAcc%</th>
<th class="tg-cly1">mIoU%</th>
</tr>
<tr>
<td rowspan="6" class="tg-cly1">Deeplab-V3<br></td>
<td class="tg-cly1">ResNet-50</td>
<td class="tg-cly1">80.39</td>
<td class="tg-cly1">42.1</td>
</tr>
<tr>
<td class="tg-cly1">ResNet-101</td>
<td class="tg-cly1">81.11</b></td>
<td class="tg-cly1">44.14</b></td>
</tr>
<tr>
<td class="tg-cly1">ResNeSt-50 (<span style="color:red">ours</span>)</td>
<td class="tg-cly1"><b>81.17</b></td>
<td class="tg-cly1"><b>45.12</b></td>
</tr>
<tr>
<td class="tg-0lax">ResNeSt-101 (<span style="color:red">ours</span>)</td>
<td class="tg-0lax"><b>82.07</td>
<td class="tg-0lax"><b>46.91</b></td>
</tr>
<tr>
<td class="tg-0lax">ResNeSt-200 (<span style="color:red">ours</span>)</td>
<td class="tg-0lax"><b>82.45</td>
<td class="tg-0lax"><b>48.36</b></td>
</tr>
<tr>
<td class="tg-0lax">ResNeSt-269 (<span style="color:red">ours</span>)</td>
<td class="tg-0lax"><b>82.62</td>
<td class="tg-0lax"><b>47.60</b></td>
</tr>
</table>