use crate::component::func::{Func, LiftContext, LowerContext, Options};
use crate::component::matching::InstanceType;
use crate::component::storage::{storage_as_slice, storage_as_slice_mut};
use crate::prelude::*;
use crate::runtime::vm::component::ComponentInstance;
use crate::runtime::vm::SendSyncPtr;
use crate::{AsContextMut, StoreContext, StoreContextMut, ValRaw};
use alloc::borrow::Cow;
use alloc::sync::Arc;
use core::fmt;
use core::marker;
use core::mem::{self, MaybeUninit};
use core::ptr::NonNull;
use core::str;
use wasmtime_environ::component::{
CanonicalAbiInfo, ComponentTypes, InterfaceType, StringEncoding, VariantInfo, MAX_FLAT_PARAMS,
MAX_FLAT_RESULTS,
};
/// A statically-typed version of [`Func`] which takes `Params` as input and
/// returns `Return`.
///
/// This is an efficient way to invoke a WebAssembly component: when the
/// inputs and output are statically known, the vast majority of machinery
/// and checks can be eschewed when calling WebAssembly. This is the most
/// optimized way to call a WebAssembly component.
///
/// Note that like [`Func`] this is a pointer within a [`Store`](crate::Store)
/// and usage will panic if used with the wrong store.
///
/// This type is primarily created with the [`Func::typed`] API.
///
/// See [`ComponentType`] for more information about supported types.
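///
/// # Examples
///
/// A minimal sketch, assuming an untyped [`Func`] has already been looked up
/// on an instantiated component (the `(u32, u32) -> u32` signature here is
/// hypothetical):
///
/// ```no_run
/// use wasmtime::Store;
/// use wasmtime::component::Func;
///
/// fn example(store: &mut Store<()>, func: Func) -> wasmtime::Result<()> {
///     // Check the signature once up front; subsequent calls skip the
///     // per-call type checks of the untyped API.
///     let typed = func.typed::<(u32, u32), (u32,)>(&*store)?;
///     let (sum,) = typed.call(&mut *store, (1, 2))?;
///     typed.post_return(&mut *store)?;
///     println!("guest returned {sum}");
///     Ok(())
/// }
/// ```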
pub struct TypedFunc<Params, Return> {
func: Func,
// The definition of this field is somewhat subtle and may be surprising.
// Naively one might expect something like
//
// _marker: marker::PhantomData<fn(Params) -> Return>,
//
// since this is a function pointer, after all. The problem with this
// definition, though, is that it imposes the wrong variance on `Params` from
// what we want. Abstractly a `fn(Params)` is able to store `Params` within
// it meaning you can only give it `Params` that live longer than the
// function pointer.
//
// With a component model function, however, we're always copying data from
// the host into the guest, so we are never storing pointers to `Params`
// into the guest outside the duration of a `call`, meaning we can actually
// accept values in `TypedFunc::call` which live for a shorter duration
// than the `Params` argument on the struct.
//
// This all means that we don't use a phantom function pointer, but instead
// feign phantom storage here to get the variance desired.
_marker: marker::PhantomData<(Params, Return)>,
}
impl<Params, Return> Copy for TypedFunc<Params, Return> {}
impl<Params, Return> Clone for TypedFunc<Params, Return> {
fn clone(&self) -> TypedFunc<Params, Return> {
*self
}
}
impl<Params, Return> TypedFunc<Params, Return>
where
Params: ComponentNamedList + Lower,
Return: ComponentNamedList + Lift,
{
/// Creates a new [`TypedFunc`] from the provided component [`Func`],
/// unsafely asserting that the underlying function takes `Params` as
/// input and returns `Return`.
///
/// # Unsafety
///
/// This is an unsafe function because it does not verify that the [`Func`]
/// provided actually implements this signature. It's up to the caller to
/// have performed some other sort of check to ensure that the signature is
/// correct.
pub unsafe fn new_unchecked(func: Func) -> TypedFunc<Params, Return> {
TypedFunc {
_marker: marker::PhantomData,
func,
}
}
/// Returns the underlying un-typed [`Func`] that this [`TypedFunc`]
/// references.
pub fn func(&self) -> &Func {
&self.func
}
/// Calls the underlying WebAssembly component function using the provided
/// `params` as input.
///
/// This method is used to enter into a component. Execution happens within
/// the `store` provided. The `params` are copied into WebAssembly memory
/// as appropriate and a core wasm function is invoked.
///
/// # Post-return
///
/// In the component model each function can have a "post return" specified
/// which allows cleaning up the arguments returned to the host. For example
/// if WebAssembly returns a string to the host then it might be a uniquely
/// allocated string which, after the host finishes processing it, needs to
/// be deallocated in the wasm instance's own linear memory to prevent
/// memory leaks in wasm itself. The `post-return` canonical abi option is
/// used to configure this.
///
/// To accommodate this feature of the component model after invoking a
/// function via [`TypedFunc::call`] you must next invoke
/// [`TypedFunc::post_return`]. Note that the return value of the function
/// should be processed between these two function calls. The return value
/// continues to be usable from an embedder's perspective after
/// `post_return` is called, but after `post_return` is invoked it may no
/// longer retain the same value that the wasm module originally returned.
///
/// Also note that [`TypedFunc::post_return`] must be invoked irrespective
/// of whether the canonical ABI option `post-return` was configured or not.
/// This means that embedders must unconditionally call
/// [`TypedFunc::post_return`] when a function returns. If this function
/// call returns an error, however, then [`TypedFunc::post_return`] is not
/// required.
///
/// # Errors
///
/// This function can return an error for a number of reasons:
///
/// * If the wasm itself traps during execution.
/// * If the wasm traps while copying arguments into memory.
/// * If the wasm provides bad allocation pointers when copying arguments
/// into memory.
/// * If the wasm returns a value which violates the canonical ABI.
/// * If this function's instance cannot be entered, for example if the
/// instance is currently calling a host function.
/// * If a previous function call occurred and the corresponding
/// `post_return` hasn't been invoked yet.
///
/// In general there are many ways that things could go wrong when copying
/// types in and out of a wasm module with the canonical ABI, and certain
/// error conditions are specific to certain types. For example a
/// WebAssembly module can't return an invalid `char`, and when allocating
/// space for the host to copy a string into, the returned pointer must be
/// in-bounds in memory.
///
/// If an error happens then the error should contain detailed enough
/// information to understand which part of the canonical ABI went wrong
/// and what to inspect.
///
/// # Panics
///
/// Panics if this is called on a function in an asynchronous store. This
/// only works with functions defined within a synchronous store. Also
/// panics if `store` does not own this function.
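///
/// # Examples
///
/// A minimal sketch of the call/post-return protocol, assuming an instance
/// exporting a hypothetical function `add: func(a: u32, b: u32) -> u32`:
///
/// ```no_run
/// use wasmtime::Store;
/// use wasmtime::component::Instance;
///
/// fn example(store: &mut Store<()>, instance: &Instance) -> wasmtime::Result<()> {
///     let add = instance.get_typed_func::<(u32, u32), (u32,)>(&mut *store, "add")?;
///     // Copy the arguments into the guest and execute the function.
///     let (sum,) = add.call(&mut *store, (1, 2))?;
///     // Process the return value first ...
///     println!("guest returned {sum}");
///     // ... then unconditionally flush post-return state before the next
///     // call to this function.
///     add.post_return(&mut *store)?;
///     Ok(())
/// }
/// ```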
pub fn call(&self, store: impl AsContextMut, params: Params) -> Result<Return> {
assert!(
!store.as_context().async_support(),
"must use `call_async` when async support is enabled on the config"
);
self.call_impl(store, params)
}
/// Exactly like [`Self::call`], except for use on asynchronous stores.
///
/// # Panics
///
/// Panics if this is called on a function in a synchronous store. This
/// only works with functions defined within an asynchronous store. Also
/// panics if `store` does not own this function.
#[cfg(feature = "async")]
pub async fn call_async<T>(
&self,
mut store: impl AsContextMut<Data = T>,
params: Params,
) -> Result<Return>
where
T: Send,
Params: Send + Sync,
Return: Send + Sync,
{
let mut store = store.as_context_mut();
assert!(
store.0.async_support(),
"cannot use `call_async` when async support is not enabled on the config"
);
store
.on_fiber(|store| self.call_impl(store, params))
.await?
}
fn call_impl(&self, mut store: impl AsContextMut, params: Params) -> Result<Return> {
let store = &mut store.as_context_mut();
// Note that this is simpler at runtime than it might read here. We're
// doing a runtime dispatch on the `flatten_count` for the params/results
// to see whether they fit within the maximum flat counts. This creates 4
// cases to handle. In reality this is a highly optimizable branch where
// LLVM will easily figure out that only one branch here is taken.
//
// Otherwise this current construction is done to ensure that the stack
// space reserved for the params/results is always of the appropriate
// size (as the params/results needed differ depending on the "flatten"
// count)
if Params::flatten_count() <= MAX_FLAT_PARAMS {
if Return::flatten_count() <= MAX_FLAT_RESULTS {
self.func.call_raw(
store,
&params,
Self::lower_stack_args,
Self::lift_stack_result,
)
} else {
self.func.call_raw(
store,
&params,
Self::lower_stack_args,
Self::lift_heap_result,
)
}
} else {
if Return::flatten_count() <= MAX_FLAT_RESULTS {
self.func.call_raw(
store,
&params,
Self::lower_heap_args,
Self::lift_stack_result,
)
} else {
self.func.call_raw(
store,
&params,
Self::lower_heap_args,
Self::lift_heap_result,
)
}
}
}
/// Lower parameters directly onto the stack specified by the `dst`
/// location.
///
/// This is only valid to call when the "flatten count" is small enough, or
/// when the canonical ABI says arguments go through the stack rather than
/// the heap.
fn lower_stack_args<T>(
cx: &mut LowerContext<'_, T>,
params: &Params,
ty: InterfaceType,
dst: &mut MaybeUninit<Params::Lower>,
) -> Result<()> {
assert!(Params::flatten_count() <= MAX_FLAT_PARAMS);
params.lower(cx, ty, dst)?;
Ok(())
}
/// Lower parameters onto a heap-allocated location.
///
/// This is used when the stack space to be used for the arguments is above
/// the `MAX_FLAT_PARAMS` threshold. Here the wasm's `realloc` function is
/// invoked to allocate space and then parameters are stored at that heap
/// pointer location.
fn lower_heap_args<T>(
cx: &mut LowerContext<'_, T>,
params: &Params,
ty: InterfaceType,
dst: &mut MaybeUninit<ValRaw>,
) -> Result<()> {
assert!(Params::flatten_count() > MAX_FLAT_PARAMS);
// Memory must exist via validation if the arguments are stored on the
// heap, so we can create a `MemoryMut` at this point. Afterwards
// `realloc` is used to allocate space for all the arguments and then
// they're all stored in linear memory.
//
// Note that `realloc` will bake in a check that the returned pointer is
// in-bounds.
let ptr = cx.realloc(0, 0, Params::ALIGN32, Params::SIZE32)?;
params.store(cx, ty, ptr)?;
// Note that the pointer here is stored as a 64-bit integer. This allows
// this to work with either 32 or 64-bit memories. For a 32-bit memory
// it'll just ignore the upper 32 zero bits, and for 64-bit memories
// this'll have the full 64-bits. Note that for 32-bit memories the call
// to `realloc` above guarantees that `ptr` is in-bounds, meaning that the
// zero-extended upper bits of `ptr` are guaranteed to be zero.
//
// This comment about 64-bit integers is also referred to below with
// "WRITEPTR64".
dst.write(ValRaw::i64(ptr as i64));
Ok(())
}
/// Lift the result of a function directly from the stack result.
///
/// This is only used when the result fits in the maximum number of stack
/// slots.
fn lift_stack_result(
cx: &mut LiftContext<'_>,
ty: InterfaceType,
dst: &Return::Lower,
) -> Result<Return> {
assert!(Return::flatten_count() <= MAX_FLAT_RESULTS);
Return::lift(cx, ty, dst)
}
/// Lift the result of a function where the result is stored indirectly on
/// the heap.
fn lift_heap_result(
cx: &mut LiftContext<'_>,
ty: InterfaceType,
dst: &ValRaw,
) -> Result<Return> {
assert!(Return::flatten_count() > MAX_FLAT_RESULTS);
// FIXME: needs to read an i64 for memory64
let ptr = usize::try_from(dst.get_u32()).err2anyhow()?;
if ptr % usize::try_from(Return::ALIGN32).err2anyhow()? != 0 {
bail!("return pointer not aligned");
}
let bytes = cx
.memory()
.get(ptr..)
.and_then(|b| b.get(..Return::SIZE32))
.ok_or_else(|| anyhow::anyhow!("pointer out of bounds of memory"))?;
Return::load(cx, ty, bytes)
}
/// See [`Func::post_return`]
pub fn post_return(&self, store: impl AsContextMut) -> Result<()> {
self.func.post_return(store)
}
/// See [`Func::post_return_async`]
#[cfg(feature = "async")]
pub async fn post_return_async<T: Send>(
&self,
store: impl AsContextMut<Data = T>,
) -> Result<()> {
self.func.post_return_async(store).await
}
}
/// A trait representing a static list of named types that can be passed to or
/// returned from a [`TypedFunc`].
///
/// This trait is implemented for a number of tuple types and is not expected
/// to be implemented externally. The contents of this trait are hidden as it's
/// intended to be an implementation detail of Wasmtime. The contents of this
/// trait are not covered by Wasmtime's stability guarantees.
///
/// For more information about this trait see [`Func::typed`] and
/// [`TypedFunc`].
//
// Note that this is an `unsafe` trait, and the unsafety means that
// implementations of this trait must be correct or otherwise [`TypedFunc`]
// would not be memory safe. The main reason this is `unsafe` is the
// `typecheck` function which must operate correctly relative to the `AsTuple`
// interpretation of the implementor.
pub unsafe trait ComponentNamedList: ComponentType {}
/// A trait representing types which can be passed to and read from components
/// with the canonical ABI.
///
/// This trait is implemented for Rust types which can be communicated to
/// components. The [`Func::typed`] and [`TypedFunc`] Rust items are the main
/// consumers of this trait.
///
/// Supported Rust types include:
///
/// | Component Model Type | Rust Type |
/// |-----------------------------------|--------------------------------------|
/// | `{s,u}{8,16,32,64}` | `{i,u}{8,16,32,64}` |
/// | `f{32,64}` | `f{32,64}` |
/// | `bool` | `bool` |
/// | `char` | `char` |
/// | `tuple<A, B>` | `(A, B)` |
/// | `option<T>` | `Option<T>` |
/// | `result` | `Result<(), ()>` |
/// | `result<T>` | `Result<T, ()>` |
/// | `result<_, E>` | `Result<(), E>` |
/// | `result<T, E>` | `Result<T, E>` |
/// | `string` | `String`, `&str`, or [`WasmStr`] |
/// | `list<T>` | `Vec<T>`, `&[T]`, or [`WasmList`] |
/// | `own<T>`, `borrow<T>` | [`Resource<T>`] or [`ResourceAny`] |
/// | `record` | [`#[derive(ComponentType)]`][d-cm] |
/// | `variant` | [`#[derive(ComponentType)]`][d-cm] |
/// | `enum` | [`#[derive(ComponentType)]`][d-cm] |
/// | `flags` | [`flags!`][f-m] |
///
/// [`Resource<T>`]: crate::component::Resource
/// [`ResourceAny`]: crate::component::ResourceAny
/// [d-cm]: macro@crate::component::ComponentType
/// [f-m]: crate::component::flags
///
/// Rust standard library pointers such as `&T`, `Box<T>`, `Rc<T>`, and `Arc<T>`
/// additionally represent whatever type `T` represents in the component model.
/// Note that types such as `record`, `variant`, `enum`, and `flags` are
/// defined by the embedder at compile time. The derive macros linked above
/// derive implementations of this trait for custom types, mapping them to
/// custom types in the component model. For `record`, `variant`, `enum`, and
/// `flags`, those types are often generated by the
/// [`bindgen!`](crate::component::bindgen) macro from WIT definitions.
///
/// Types that implement [`ComponentType`] are used for `Params` and `Return`
/// in [`TypedFunc`] and [`Func::typed`].
///
/// The contents of this trait are hidden as it's intended to be an
/// implementation detail of Wasmtime. The contents of this trait are not
/// covered by Wasmtime's stability guarantees.
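///
/// # Examples
///
/// A sketch of mapping a WIT `record` to a custom Rust type via the derive
/// macros (the `point` record and its fields here are hypothetical):
///
/// ```no_run
/// use wasmtime::component::{ComponentType, Lift, Lower};
///
/// // Mirrors `record point { x: u32, y: u32 }` in WIT.
/// #[derive(ComponentType, Lift, Lower)]
/// #[component(record)]
/// struct Point {
///     x: u32,
///     y: u32,
/// }
/// ```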
//
// Note that this is an `unsafe` trait as `TypedFunc`'s safety heavily relies on
// the correctness of the implementations of this trait. Some ways in which this
// trait must be correct to be safe are:
//
// * The `Lower` associated type must be a `ValRaw` sequence. It doesn't have to
// literally be `[ValRaw; N]` but when laid out in memory it must consist of
// adjacent `ValRaw` values, have a size that's a multiple of the size of
// `ValRaw`, and have the same alignment.
//
// * The `lower` function must initialize the bits within `Lower` that are going
// to be read by the trampoline that's used to enter core wasm. A trampoline
// is passed `*mut Lower` and will read the canonical abi arguments in
// sequence, so all of the bits must be correctly initialized.
//
// * The `size` and `align` functions must be correct for this value stored in
// the canonical ABI. The `Cursor<T>` iteration of these bytes relies on this
// for correctness as it otherwise eschews bounds-checking.
//
// There are likely some other correctness issues which aren't documented as
// well; this isn't intended to be an exhaustive list. It suffices to say,
// though, that correctness bugs in this trait implementation are highly likely
// to lead to security bugs, which again leads to the `unsafe` in the trait.
//
// Also note that this trait specifically is not sealed because we have a proc
// macro that generates implementations of this trait for external types in a
// `#[derive]`-like fashion.
pub unsafe trait ComponentType {
/// Representation of the "lowered" form of this component value.
///
/// Lowerings lower into core wasm values which are represented by `ValRaw`.
/// This `Lower` type must be a list of `ValRaw` as either a literal array
/// or a struct where every field is a `ValRaw`. This must be `Copy` (as
/// `ValRaw` is `Copy`) and support all byte patterns. This being correct is
/// one reason why the trait is unsafe.
#[doc(hidden)]
type Lower: Copy;
/// The information about this type's canonical ABI (size/align/etc).
#[doc(hidden)]
const ABI: CanonicalAbiInfo;
#[doc(hidden)]
const SIZE32: usize = Self::ABI.size32 as usize;
#[doc(hidden)]
const ALIGN32: u32 = Self::ABI.align32;
#[doc(hidden)]
const IS_RUST_UNIT_TYPE: bool = false;
/// Returns the number of core wasm abi values that will be used to
/// represent this type in its lowered form.
///
/// This divides the size of `Self::Lower` by the size of `ValRaw`.
#[doc(hidden)]
fn flatten_count() -> usize {
assert!(mem::size_of::<Self::Lower>() % mem::size_of::<ValRaw>() == 0);
assert!(mem::align_of::<Self::Lower>() == mem::align_of::<ValRaw>());
mem::size_of::<Self::Lower>() / mem::size_of::<ValRaw>()
}
/// Performs a type-check to see whether this component value type matches
/// the interface type `ty` provided.
#[doc(hidden)]
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()>;
}
#[doc(hidden)]
pub unsafe trait ComponentVariant: ComponentType {
const CASES: &'static [Option<CanonicalAbiInfo>];
const INFO: VariantInfo = VariantInfo::new_static(Self::CASES);
const PAYLOAD_OFFSET32: usize = Self::INFO.payload_offset32 as usize;
}
/// Host types which can be passed to WebAssembly components.
///
/// This trait is implemented for all types that can be passed to components
/// either as parameters of component exports or returns of component imports.
/// This trait represents the ability to convert from the native host
/// representation to the canonical ABI.
///
/// Built-in types to Rust such as `Option<T>` implement this trait as
/// appropriate. For a mapping of component model to Rust types see
/// [`ComponentType`].
///
/// For user-defined types, for example `record` types mapped to Rust `struct`s,
/// this crate additionally has
/// [`#[derive(Lower)]`](macro@crate::component::Lower).
///
/// Note that like [`ComponentType`] the definition of this trait is intended to
/// be an internal implementation detail of Wasmtime at this time. It's
/// recommended to use the `#[derive(Lower)]` implementation instead.
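///
/// # Examples
///
/// Because `&str` implements [`Lower`], a host string can be passed directly
/// as a typed parameter. A sketch, assuming a hypothetical export
/// `log: func(msg: string)`:
///
/// ```no_run
/// use wasmtime::Store;
/// use wasmtime::component::Instance;
///
/// fn example(store: &mut Store<()>, instance: &Instance) -> wasmtime::Result<()> {
///     let log = instance.get_typed_func::<(&str,), ()>(&mut *store, "log")?;
///     // The string is lowered into the guest's linear memory as a
///     // component-model `string`.
///     log.call(&mut *store, ("hello",))?;
///     log.post_return(&mut *store)?;
///     Ok(())
/// }
/// ```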
pub unsafe trait Lower: ComponentType {
/// Performs the "lower" function in the canonical ABI.
///
/// This method will lower the current value into a component. The `lower`
/// function performs a "flat" lowering into the `dst` specified which is
/// allowed to be uninitialized entering this method but is guaranteed to be
/// fully initialized if the method returns `Ok(())`.
///
/// The `cx` context provided is the context within which this lowering is
/// happening. This contains information such as canonical options specified
/// (e.g. string encodings, memories, etc), the store itself, along with
/// type information.
///
/// The `ty` parameter is the destination type that is being lowered into.
/// For example this is the component's "view" of the type that is being
/// lowered. This is guaranteed to have passed a `typecheck` earlier.
///
/// This will only be called if `typecheck` passes for `Op::Lower`.
#[doc(hidden)]
fn lower<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()>;
/// Performs the "store" operation in the canonical ABI.
///
/// This function will store `self` into the linear memory described by
/// `cx` at the `offset` provided.
///
/// It is expected that `offset` is a valid offset in memory for
/// `Self::SIZE32` bytes. At this time that's not an unsafe contract as it's
/// always re-checked on all stores, but this is something that will need to
/// be improved in the future to remove extra bounds checks. For now this
/// function will panic if there's a bug and `offset` isn't valid within
/// memory.
///
/// The `ty` type information passed here is the same as the type
/// information passed to `lower` above, and is the component's own view of
/// what the resulting type should be.
///
/// This will only be called if `typecheck` passes for `Op::Lower`.
#[doc(hidden)]
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()>;
/// Provided method to lower a list of `Self` into memory.
///
/// Requires that `offset` has already been checked for alignment and
/// validity in terms of being in-bounds, otherwise this may panic.
///
/// This is primarily here to get overridden for implementations of integers
/// which can avoid some extra fluff and use a pattern that's more easily
/// optimizable by LLVM.
#[doc(hidden)]
fn store_list<T>(
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
mut offset: usize,
items: &[Self],
) -> Result<()>
where
Self: Sized,
{
for item in items {
item.store(cx, ty, offset)?;
offset += Self::SIZE32;
}
Ok(())
}
}
/// Host types which can be created from the canonical ABI.
///
/// This is the mirror of the [`Lower`] trait where it represents the capability
/// of acquiring items from WebAssembly and passing them to the host.
///
/// Built-in types to Rust such as `Option<T>` implement this trait as
/// appropriate. For a mapping of component model to Rust types see
/// [`ComponentType`].
///
/// For user-defined types, for example `record` types mapped to Rust `struct`s,
/// this crate additionally has
/// [`#[derive(Lift)]`](macro@crate::component::Lift).
///
/// Note that like [`ComponentType`] the definition of this trait is intended to
/// be an internal implementation detail of Wasmtime at this time. It's
/// recommended to use the `#[derive(Lift)]` implementation instead.
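///
/// # Examples
///
/// Because `Vec<u8>` implements [`Lift`], a guest-produced list can be read
/// back into a host-owned value. A sketch, assuming a hypothetical export
/// `make-bytes: func() -> list<u8>`:
///
/// ```no_run
/// use wasmtime::Store;
/// use wasmtime::component::Instance;
///
/// fn example(store: &mut Store<()>, instance: &Instance) -> wasmtime::Result<()> {
///     let f = instance.get_typed_func::<(), (Vec<u8>,)>(&mut *store, "make-bytes")?;
///     // The list contents are lifted (copied) out of the guest's linear
///     // memory into a host-owned `Vec<u8>`.
///     let (bytes,) = f.call(&mut *store, ())?;
///     f.post_return(&mut *store)?;
///     println!("guest returned {} bytes", bytes.len());
///     Ok(())
/// }
/// ```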
pub unsafe trait Lift: Sized + ComponentType {
/// Performs the "lift" operation in the canonical ABI.
///
/// This function performs a "flat" lift operation from the `src` specified
/// which is a sequence of core wasm values. The lifting operation will
/// validate core wasm values and produce a `Self` on success.
///
/// The `cx` provided contains contextual information such as the store
/// that's being loaded from, canonical options, and type information.
///
/// The `ty` parameter is the origin component's specification of the type
/// that is being lifted, for example the record type or the resource type
/// being lifted.
///
/// Note that this will only be called if `typecheck` passes for
/// `Op::Lift`.
#[doc(hidden)]
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self>;
/// Performs the "load" operation in the canonical ABI.
///
/// This will read the `bytes` provided, which are a sub-slice into the
/// linear memory described by `cx`. The `bytes` array provided is
/// guaranteed to be `Self::SIZE32` bytes large. All of memory is then also
/// available through `cx` for bounds-checks and such as necessary for
/// strings/lists.
///
/// The `ty` argument is the type that's being loaded, as described by the
/// original component.
///
/// Note that this will only be called if `typecheck` passes for
/// `Op::Lift`.
#[doc(hidden)]
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self>;
/// Converts `list` into a `Vec<T>`, used in `Lift for Vec<T>`.
///
/// This is primarily here to get overridden for implementations of integers
/// which can avoid some extra fluff and use a pattern that's more easily
/// optimizable by LLVM.
#[doc(hidden)]
fn load_list(cx: &mut LiftContext<'_>, list: &WasmList<Self>) -> Result<Vec<Self>>
where
Self: Sized,
{
(0..list.len)
.map(|index| list.get_from_store(cx, index).unwrap())
.collect()
}
}
// Macro to help generate "forwarding implementations" of `ComponentType` to
// another type, used for wrappers in Rust like `&T`, `Box<T>`, etc. Note that
// these wrappers only implement lowering because lifting native Rust types
// cannot be done.
macro_rules! forward_type_impls {
($(($($generics:tt)*) $a:ty => $b:ty,)*) => ($(
unsafe impl <$($generics)*> ComponentType for $a {
type Lower = <$b as ComponentType>::Lower;
const ABI: CanonicalAbiInfo = <$b as ComponentType>::ABI;
#[inline]
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()> {
<$b as ComponentType>::typecheck(ty, types)
}
}
)*)
}
forward_type_impls! {
(T: ComponentType + ?Sized) &'_ T => T,
(T: ComponentType + ?Sized) Box<T> => T,
(T: ComponentType + ?Sized) alloc::rc::Rc<T> => T,
(T: ComponentType + ?Sized) alloc::sync::Arc<T> => T,
() String => str,
(T: ComponentType) Vec<T> => [T],
}
macro_rules! forward_lowers {
($(($($generics:tt)*) $a:ty => $b:ty,)*) => ($(
unsafe impl <$($generics)*> Lower for $a {
fn lower<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
<$b as Lower>::lower(self, cx, ty, dst)
}
fn store<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
<$b as Lower>::store(self, cx, ty, offset)
}
}
)*)
}
forward_lowers! {
(T: Lower + ?Sized) &'_ T => T,
(T: Lower + ?Sized) Box<T> => T,
(T: Lower + ?Sized) alloc::rc::Rc<T> => T,
(T: Lower + ?Sized) alloc::sync::Arc<T> => T,
() String => str,
(T: Lower) Vec<T> => [T],
}
macro_rules! forward_string_lifts {
($($a:ty,)*) => ($(
unsafe impl Lift for $a {
#[inline]
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
Ok(<WasmStr as Lift>::lift(cx, ty, src)?.to_str_from_memory(cx.memory())?.into())
}
#[inline]
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
Ok(<WasmStr as Lift>::load(cx, ty, bytes)?.to_str_from_memory(cx.memory())?.into())
}
}
)*)
}
forward_string_lifts! {
Box<str>,
alloc::rc::Rc<str>,
alloc::sync::Arc<str>,
String,
}
macro_rules! forward_list_lifts {
($($a:ty,)*) => ($(
unsafe impl <T: Lift> Lift for $a {
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
let list = <WasmList::<T> as Lift>::lift(cx, ty, src)?;
Ok(T::load_list(cx, &list)?.into())
}
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
let list = <WasmList::<T> as Lift>::load(cx, ty, bytes)?;
Ok(T::load_list(cx, &list)?.into())
}
}
)*)
}
forward_list_lifts! {
Box<[T]>,
alloc::rc::Rc<[T]>,
alloc::sync::Arc<[T]>,
Vec<T>,
}
// Macro to help generate `ComponentType` implementations for primitive types
// such as integers, char, bool, etc.
macro_rules! integers {
($($primitive:ident = $ty:ident in $field:ident/$get:ident with abi:$abi:ident,)*) => ($(
unsafe impl ComponentType for $primitive {
type Lower = ValRaw;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::$abi;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::$ty => Ok(()),
other => bail!("expected `{}` found `{}`", desc(&InterfaceType::$ty), desc(other))
}
}
}
unsafe impl Lower for $primitive {
#[inline]
#[allow(trivial_numeric_casts)]
fn lower<T>(
&self,
_cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::$ty));
dst.write(ValRaw::$field(*self as $field));
Ok(())
}
#[inline]
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::$ty));
debug_assert!(offset % Self::SIZE32 == 0);
*cx.get(offset) = self.to_le_bytes();
Ok(())
}
fn store_list<T>(
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
items: &[Self],
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::$ty));
// Double-check that the CM alignment is at least the host's
// alignment for this type which should be true for all
// platforms.
assert!((Self::ALIGN32 as usize) >= mem::align_of::<Self>());
// Slice `cx`'s memory to the window that we'll be modifying.
// This should all have already been verified in terms of
// alignment and sizing meaning that these assertions here are
// not truly necessary but are instead double-checks.
//
// Note that we're casting a `[u8]` slice to `[Self]` with
// `align_to_mut` which is not safe in general but is safe in
// our specific case as all `u8` patterns are valid `Self`
// patterns since `Self` is an integral type.
let dst = &mut cx.as_slice_mut()[offset..][..items.len() * Self::SIZE32];
let (before, middle, end) = unsafe { dst.align_to_mut::<Self>() };
assert!(before.is_empty() && end.is_empty());
assert_eq!(middle.len(), items.len());
// And with all that out of the way perform the copying loop.
// This is not a `copy_from_slice` because endianness needs to
// be handled here, but LLVM should pretty easily transform this
// into a memcpy on little-endian platforms.
for (dst, src) in middle.iter_mut().zip(items) {
*dst = src.to_le();
}
Ok(())
}
}
unsafe impl Lift for $primitive {
#[inline]
#[allow(trivial_numeric_casts, clippy::cast_possible_truncation)]
fn lift(_cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::$ty));
Ok(src.$get() as $primitive)
}
#[inline]
fn load(_cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::$ty));
debug_assert!((bytes.as_ptr() as usize) % Self::SIZE32 == 0);
Ok($primitive::from_le_bytes(bytes.try_into().unwrap()))
}
fn load_list(cx: &mut LiftContext<'_>, list: &WasmList<Self>) -> Result<Vec<Self>> {
Ok(
list._as_le_slice(cx.memory())
.iter()
.map(|i| Self::from_le(*i))
.collect(),
)
}
}
)*)
}
integers! {
i8 = S8 in i32/get_i32 with abi:SCALAR1,
u8 = U8 in u32/get_u32 with abi:SCALAR1,
i16 = S16 in i32/get_i32 with abi:SCALAR2,
u16 = U16 in u32/get_u32 with abi:SCALAR2,
i32 = S32 in i32/get_i32 with abi:SCALAR4,
u32 = U32 in u32/get_u32 with abi:SCALAR4,
i64 = S64 in i64/get_i64 with abi:SCALAR8,
u64 = U64 in u64/get_u64 with abi:SCALAR8,
}
macro_rules! floats {
($($float:ident/$get_float:ident = $ty:ident with abi:$abi:ident)*) => ($(const _: () = {
/// All floats passed in and out of the canonical ABI always have their
/// NaN payloads canonicalized. Conveniently the `NAN` constant in Rust
/// has the same representation as the canonical NaN, so we can use that
/// for the NaN value.
#[inline]
fn canonicalize(float: $float) -> $float {
if float.is_nan() {
$float::NAN
} else {
float
}
}
unsafe impl ComponentType for $float {
type Lower = ValRaw;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::$abi;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::$ty => Ok(()),
other => bail!("expected `{}` found `{}`", desc(&InterfaceType::$ty), desc(other))
}
}
}
unsafe impl Lower for $float {
#[inline]
fn lower<T>(
&self,
_cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::$ty));
dst.write(ValRaw::$float(canonicalize(*self).to_bits()));
Ok(())
}
#[inline]
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::$ty));
debug_assert!(offset % Self::SIZE32 == 0);
let ptr = cx.get(offset);
*ptr = canonicalize(*self).to_bits().to_le_bytes();
Ok(())
}
}
unsafe impl Lift for $float {
#[inline]
fn lift(_cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::$ty));
Ok(canonicalize($float::from_bits(src.$get_float())))
}
#[inline]
fn load(_cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::$ty));
debug_assert!((bytes.as_ptr() as usize) % Self::SIZE32 == 0);
Ok(canonicalize($float::from_le_bytes(bytes.try_into().unwrap())))
}
}
};)*)
}
floats! {
f32/get_f32 = Float32 with abi:SCALAR4
f64/get_f64 = Float64 with abi:SCALAR8
}
unsafe impl ComponentType for bool {
type Lower = ValRaw;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::SCALAR1;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::Bool => Ok(()),
other => bail!("expected `bool` found `{}`", desc(other)),
}
}
}
unsafe impl Lower for bool {
fn lower<T>(
&self,
_cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::Bool));
dst.write(ValRaw::i32(*self as i32));
Ok(())
}
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::Bool));
debug_assert!(offset % Self::SIZE32 == 0);
cx.get::<1>(offset)[0] = *self as u8;
Ok(())
}
}
unsafe impl Lift for bool {
#[inline]
fn lift(_cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::Bool));
match src.get_i32() {
0 => Ok(false),
_ => Ok(true),
}
}
#[inline]
fn load(_cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::Bool));
match bytes[0] {
0 => Ok(false),
_ => Ok(true),
}
}
}
unsafe impl ComponentType for char {
type Lower = ValRaw;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::SCALAR4;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::Char => Ok(()),
other => bail!("expected `char` found `{}`", desc(other)),
}
}
}
unsafe impl Lower for char {
#[inline]
fn lower<T>(
&self,
_cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::Char));
dst.write(ValRaw::u32(u32::from(*self)));
Ok(())
}
#[inline]
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::Char));
debug_assert!(offset % Self::SIZE32 == 0);
*cx.get::<4>(offset) = u32::from(*self).to_le_bytes();
Ok(())
}
}
unsafe impl Lift for char {
#[inline]
fn lift(_cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::Char));
Ok(char::try_from(src.get_u32()).err2anyhow()?)
}
#[inline]
fn load(_cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::Char));
debug_assert!((bytes.as_ptr() as usize) % Self::SIZE32 == 0);
let bits = u32::from_le_bytes(bytes.try_into().unwrap());
Ok(char::try_from(bits).err2anyhow()?)
}
}
// TODO: these probably need different constants for memory64
const UTF16_TAG: usize = 1 << 31;
const MAX_STRING_BYTE_LENGTH: usize = (1 << 31) - 1;
// Note that this is similar to `ComponentType for WasmStr` except it can only
// be used for lowering, not lifting.
unsafe impl ComponentType for str {
type Lower = [ValRaw; 2];
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::POINTER_PAIR;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::String => Ok(()),
other => bail!("expected `string` found `{}`", desc(other)),
}
}
}
unsafe impl Lower for str {
fn lower<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
dst: &mut MaybeUninit<[ValRaw; 2]>,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::String));
let (ptr, len) = lower_string(cx, self)?;
// See "WRITEPTR64" above for why this is always storing a 64-bit
// integer.
map_maybe_uninit!(dst[0]).write(ValRaw::i64(ptr as i64));
map_maybe_uninit!(dst[1]).write(ValRaw::i64(len as i64));
Ok(())
}
fn store<T>(
&self,
cx: &mut LowerContext<'_, T>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(matches!(ty, InterfaceType::String));
debug_assert!(offset % (Self::ALIGN32 as usize) == 0);
let (ptr, len) = lower_string(cx, self)?;
// FIXME: needs memory64 handling
*cx.get(offset + 0) = u32::try_from(ptr).unwrap().to_le_bytes();
*cx.get(offset + 4) = u32::try_from(len).unwrap().to_le_bytes();
Ok(())
}
}
fn lower_string<T>(cx: &mut LowerContext<'_, T>, string: &str) -> Result<(usize, usize)> {
// Note that in general the wasm module can't assume anything about what the
// host strings are encoded as. Additionally hosts are allowed to have
// differently-encoded strings at runtime. Finally when copying a string
// into wasm it's somewhat strict in the sense that the various patterns of
// allocation and such are already dictated for us.
//
// In general what this means is that when copying a string from the host
// into the destination we need to follow one of the cases of copying into
// WebAssembly. It doesn't particularly matter which case as long as it ends
// up in the right encoding. For example a destination encoding of
// latin1+utf16 has a number of ways to get copied into and we do something
// here that isn't the default "utf8 to latin1+utf16" since we have access
// to simd-accelerated helpers in the `encoding_rs` crate. This is ok though
// because we can fake that the host string was already stored in latin1
// format and follow that copy pattern instead.
match cx.options.string_encoding() {
// This corresponds to `store_string_copy` in the canonical ABI where
// the host's representation is utf-8 and the wasm module wants utf-8 so
// a copy is all that's needed (and the `realloc` can be precise for the
// initial memory allocation).
StringEncoding::Utf8 => {
if string.len() > MAX_STRING_BYTE_LENGTH {
bail!(
"string length of {} too large to copy into wasm",
string.len()
);
}
let ptr = cx.realloc(0, 0, 1, string.len())?;
cx.as_slice_mut()[ptr..][..string.len()].copy_from_slice(string.as_bytes());
Ok((ptr, string.len()))
}
// This corresponds to `store_utf8_to_utf16` in the canonical ABI. Here
// an over-large allocation is performed and then shrunk afterwards if
// necessary.
StringEncoding::Utf16 => {
let size = string.len() * 2;
if size > MAX_STRING_BYTE_LENGTH {
bail!(
"string length of {} too large to copy into wasm",
string.len()
);
}
let mut ptr = cx.realloc(0, 0, 2, size)?;
let mut copied = 0;
let bytes = &mut cx.as_slice_mut()[ptr..][..size];
for (u, bytes) in string.encode_utf16().zip(bytes.chunks_mut(2)) {
let u_bytes = u.to_le_bytes();
bytes[0] = u_bytes[0];
bytes[1] = u_bytes[1];
copied += 1;
}
if (copied * 2) < size {
ptr = cx.realloc(ptr, size, 2, copied * 2)?;
}
Ok((ptr, copied))
}
StringEncoding::CompactUtf16 => {
// This corresponds to `store_string_to_latin1_or_utf16`
let bytes = string.as_bytes();
let mut iter = string.char_indices();
let mut ptr = cx.realloc(0, 0, 2, bytes.len())?;
let mut dst = &mut cx.as_slice_mut()[ptr..][..bytes.len()];
let mut result = 0;
while let Some((i, ch)) = iter.next() {
// Test if this `char` fits into the latin1 encoding.
if let Ok(byte) = u8::try_from(u32::from(ch)) {
dst[result] = byte;
result += 1;
continue;
}
// Otherwise, if utf16 is forced to be used, the allocation is
// bumped up to the maximum size.
let worst_case = bytes
.len()
.checked_mul(2)
.ok_or_else(|| anyhow!("byte length overflow"))?;
if worst_case > MAX_STRING_BYTE_LENGTH {
bail!("byte length too large");
}
ptr = cx.realloc(ptr, bytes.len(), 2, worst_case)?;
dst = &mut cx.as_slice_mut()[ptr..][..worst_case];
// Previously encoded latin1 bytes are inflated to their 16-bit
// size for utf16
for i in (0..result).rev() {
dst[2 * i] = dst[i];
dst[2 * i + 1] = 0;
}
// and then the remainder of the string is encoded.
for (u, bytes) in string[i..]
.encode_utf16()
.zip(dst[2 * result..].chunks_mut(2))
{
let u_bytes = u.to_le_bytes();
bytes[0] = u_bytes[0];
bytes[1] = u_bytes[1];
result += 1;
}
if worst_case > 2 * result {
ptr = cx.realloc(ptr, worst_case, 2, 2 * result)?;
}
return Ok((ptr, result | UTF16_TAG));
}
if result < bytes.len() {
ptr = cx.realloc(ptr, bytes.len(), 2, result)?;
}
Ok((ptr, result))
}
}
}
/// Representation of a string located in linear memory in a WebAssembly
/// instance.
///
/// This type can be used in place of `String` and `str` for string-taking APIs
/// in some situations. The purpose of this type is to represent a range of
/// validated bytes within a component without actually copying the bytes. The
/// primary method, [`WasmStr::to_str`], attempts to return a reference to the
/// string directly located in the component's memory, avoiding a copy into the
/// host if possible.
///
/// The downside of this type, however, is that accessing a string requires a
/// [`Store`](crate::Store) pointer (via [`StoreContext`]). Bindings generated
/// by [`bindgen!`](crate::component::bindgen), for example, do not have access
/// to [`StoreContext`] and thus can't use this type.
///
/// This is intended for more advanced use cases such as defining functions
/// directly in a [`Linker`](crate::component::Linker). It's expected that in
/// the future [`bindgen!`](crate::component::bindgen) will also have a way to
/// use this type.
///
/// This type is used with [`TypedFunc`], for example, when WebAssembly returns
/// a string. This type cannot be used to give a string to WebAssembly;
/// instead, `&str` should be used for that (since it's coming from the host).
///
/// Note that this type represents an in-bounds string in linear memory, but
/// not necessarily a valid string (e.g. valid utf-8). Validation happens
/// when [`WasmStr::to_str`] is called.
///
/// Also note that this type does not implement [`Lower`], it only implements
/// [`Lift`].
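///
/// # Examples
///
/// A sketch of lifting a string without eagerly copying it out of the guest,
/// assuming a hypothetical export `greet: func() -> string`:
///
/// ```no_run
/// use wasmtime::Store;
/// use wasmtime::component::{Instance, WasmStr};
///
/// fn example(store: &mut Store<()>, instance: &Instance) -> wasmtime::Result<()> {
///     let greet = instance.get_typed_func::<(), (WasmStr,)>(&mut *store, "greet")?;
///     let (s,) = greet.call(&mut *store, ())?;
///     // Validation and decoding happen here; for utf-8 contents this
///     // borrows directly from the component's linear memory.
///     println!("{}", s.to_str(&*store)?);
///     greet.post_return(&mut *store)?;
///     Ok(())
/// }
/// ```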
pub struct WasmStr {
ptr: usize,
len: usize,
options: Options,
}
impl WasmStr {
fn new(ptr: usize, len: usize, cx: &mut LiftContext<'_>) -> Result<WasmStr> {
let byte_len = match cx.options.string_encoding() {
StringEncoding::Utf8 => Some(len),
StringEncoding::Utf16 => len.checked_mul(2),
StringEncoding::CompactUtf16 => {
if len & UTF16_TAG == 0 {
Some(len)
} else {
(len ^ UTF16_TAG).checked_mul(2)
}
}
};
match byte_len.and_then(|len| ptr.checked_add(len)) {
Some(n) if n <= cx.memory().len() => {}
_ => bail!("string pointer/length out of bounds of memory"),
}
Ok(WasmStr {
ptr,
len,
options: *cx.options,
})
}
/// Returns the underlying string that this cursor points to.
///
/// Note that this will internally decode the string from the wasm's
/// encoding to utf-8 and additionally perform validation.
///
/// The `store` provided must be the store where this string lives to
/// access the correct memory.
///
/// # Errors
///
/// Returns an error if the string wasn't encoded correctly (e.g. invalid
/// utf-8).
///
/// # Panics
///
/// Panics if this string is not owned by `store`.
//
// TODO: should add accessors for specifically utf-8 and utf-16 that perhaps
// in an opt-in basis don't do validation. Additionally there should be some
// method that returns `[u16]` after validating to avoid the utf16-to-utf8
// transcode.
pub fn to_str<'a, T: 'a>(&self, store: impl Into<StoreContext<'a, T>>) -> Result<Cow<'a, str>> {
let store = store.into().0;
let memory = self.options.memory(store);
self.to_str_from_memory(memory)
}
fn to_str_from_memory<'a>(&self, memory: &'a [u8]) -> Result<Cow<'a, str>> {
match self.options.string_encoding() {
StringEncoding::Utf8 => self.decode_utf8(memory),
StringEncoding::Utf16 => self.decode_utf16(memory, self.len),
StringEncoding::CompactUtf16 => {
if self.len & UTF16_TAG == 0 {
self.decode_latin1(memory)
} else {
self.decode_utf16(memory, self.len ^ UTF16_TAG)
}
}
}
}
fn decode_utf8<'a>(&self, memory: &'a [u8]) -> Result<Cow<'a, str>> {
// Note that bounds-checking already happens in construction of `WasmStr`
// so this is never expected to panic. This could theoretically be
// unchecked indexing if we're feeling wild enough.
Ok(str::from_utf8(&memory[self.ptr..][..self.len])
.err2anyhow()?
.into())
}
fn decode_utf16<'a>(&self, memory: &'a [u8], len: usize) -> Result<Cow<'a, str>> {
// See notes in `decode_utf8` for why this is panicking indexing.
let memory = &memory[self.ptr..][..len * 2];
Ok(core::char::decode_utf16(
memory
.chunks(2)
.map(|chunk| u16::from_le_bytes(chunk.try_into().unwrap())),
)
.collect::<Result<String, _>>()
.err2anyhow()?
.into())
}
fn decode_latin1<'a>(&self, memory: &'a [u8]) -> Result<Cow<'a, str>> {
// See notes in `decode_utf8` for why this is panicking indexing.
Ok(encoding_rs::mem::decode_latin1(
&memory[self.ptr..][..self.len],
))
}
}
// Note that this is similar to `ComponentType for str` except it can only be
// used for lifting, not lowering.
unsafe impl ComponentType for WasmStr {
type Lower = <str as ComponentType>::Lower;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::POINTER_PAIR;
fn typecheck(ty: &InterfaceType, _types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::String => Ok(()),
other => bail!("expected `string` found `{}`", desc(other)),
}
}
}
unsafe impl Lift for WasmStr {
#[inline]
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::String));
// FIXME: needs memory64 treatment
let ptr = src[0].get_u32();
let len = src[1].get_u32();
let (ptr, len) = (
usize::try_from(ptr).err2anyhow()?,
usize::try_from(len).err2anyhow()?,
);
WasmStr::new(ptr, len, cx)
}
#[inline]
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!(matches!(ty, InterfaceType::String));
debug_assert!((bytes.as_ptr() as usize) % (Self::ALIGN32 as usize) == 0);
// FIXME: needs memory64 treatment
let ptr = u32::from_le_bytes(bytes[..4].try_into().unwrap());
let len = u32::from_le_bytes(bytes[4..].try_into().unwrap());
let (ptr, len) = (
usize::try_from(ptr).err2anyhow()?,
usize::try_from(len).err2anyhow()?,
);
WasmStr::new(ptr, len, cx)
}
}
unsafe impl<T> ComponentType for [T]
where
T: ComponentType,
{
type Lower = [ValRaw; 2];
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::POINTER_PAIR;
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::List(t) => T::typecheck(&types.types[*t].element, types),
other => bail!("expected `list` found `{}`", desc(other)),
}
}
}
unsafe impl<T> Lower for [T]
where
T: Lower,
{
fn lower<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
dst: &mut MaybeUninit<[ValRaw; 2]>,
) -> Result<()> {
let elem = match ty {
InterfaceType::List(i) => cx.types[i].element,
_ => bad_type_info(),
};
let (ptr, len) = lower_list(cx, elem, self)?;
// See "WRITEPTR64" above for why this is always storing a 64-bit
// integer.
map_maybe_uninit!(dst[0]).write(ValRaw::i64(ptr as i64));
map_maybe_uninit!(dst[1]).write(ValRaw::i64(len as i64));
Ok(())
}
fn store<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
let elem = match ty {
InterfaceType::List(i) => cx.types[i].element,
_ => bad_type_info(),
};
debug_assert!(offset % (Self::ALIGN32 as usize) == 0);
let (ptr, len) = lower_list(cx, elem, self)?;
*cx.get(offset + 0) = u32::try_from(ptr).unwrap().to_le_bytes();
*cx.get(offset + 4) = u32::try_from(len).unwrap().to_le_bytes();
Ok(())
}
}
// FIXME: this is not a memcpy for `T` where `T` is something like `u8`.
//
// Some attempts to fix this have not proved fruitful. In isolation an
// attempt was made where:
//
// * `MemoryMut` stored a `*mut [u8]` as its "last view" of memory to avoid
//   reloading the base pointer constantly. This view is reset on `realloc`.
// * The bounds-checks in `MemoryMut::get` were removed (replaced with
//   unsafe indexing)
//
// Even then, though, this didn't correctly vectorize for `Vec<u8>`. It's
// not entirely clear why, but it appeared to be related to reloading the
// base pointer to memory (I guess from `MemoryMut` itself?). Overall I'm
// not really clear on what's happening there, but this is surely going to
// be a performance bottleneck in the future.
fn lower_list<T, U>(
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
list: &[T],
) -> Result<(usize, usize)>
where
T: Lower,
{
let elem_size = T::SIZE32;
let size = list
.len()
.checked_mul(elem_size)
.ok_or_else(|| anyhow!("size overflow copying a list"))?;
let ptr = cx.realloc(0, 0, T::ALIGN32, size)?;
T::store_list(cx, ty, ptr, list)?;
Ok((ptr, list.len()))
}
/// Representation of a list of values that are owned by a WebAssembly instance.
///
/// For some more commentary about the rationale for this type see the
/// documentation of [`WasmStr`]. In summary this type can avoid a copy when
/// passing data to the host in some situations but is additionally more
/// cumbersome to use by requiring a [`Store`](crate::Store) to be provided.
///
/// This type is used whenever a `(list T)` is returned from a [`TypedFunc`],
/// for example. This type represents a list of values that are stored in linear
/// memory which are waiting to be read.
///
/// Note that this type represents only a valid range of bytes for the list
/// itself; it does not guarantee validity of the elements, which are instead
/// validated as they're read (e.g. during iteration).
///
/// Note that this type does not implement the [`Lower`] trait, only [`Lift`].
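///
/// # Example
///
/// A minimal sketch of lazily reading a returned `list<u32>`; the component
/// binary and its `get-numbers` export here are illustrative assumptions:
///
/// ```no_run
/// use wasmtime::{Engine, Store};
/// use wasmtime::component::{Component, Linker, WasmList};
///
/// # fn main() -> wasmtime::Result<()> {
/// let engine = Engine::default();
/// let component = Component::from_file(&engine, "component.wasm")?;
/// let mut store = Store::new(&engine, ());
/// let linker = Linker::new(&engine);
/// let instance = linker.instantiate(&mut store, &component)?;
/// let func = instance
///     .get_typed_func::<(), (WasmList<u32>,)>(&mut store, "get-numbers")?;
/// let (list,) = func.call(&mut store, ())?;
/// // Each element is validated as it's read out of linear memory.
/// for item in list.iter(&mut store) {
///     println!("{}", item?);
/// }
/// # Ok(())
/// # }
/// ```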
pub struct WasmList<T> {
ptr: usize,
len: usize,
options: Options,
elem: InterfaceType,
// NB: it would probably be more efficient to store a non-atomic index-style
// reference to something inside a `StoreOpaque`, but that's not easily
// available at this time, so it's left as a future exercise.
types: Arc<ComponentTypes>,
instance: SendSyncPtr<ComponentInstance>,
_marker: marker::PhantomData<T>,
}
impl<T: Lift> WasmList<T> {
fn new(
ptr: usize,
len: usize,
cx: &mut LiftContext<'_>,
elem: InterfaceType,
) -> Result<WasmList<T>> {
match len
.checked_mul(T::SIZE32)
.and_then(|len| ptr.checked_add(len))
{
Some(n) if n <= cx.memory().len() => {}
_ => bail!("list pointer/length out of bounds of memory"),
}
if ptr % usize::try_from(T::ALIGN32).err2anyhow()? != 0 {
bail!("list pointer is not aligned")
}
Ok(WasmList {
ptr,
len,
options: *cx.options,
elem,
types: cx.types.clone(),
instance: SendSyncPtr::new(NonNull::new(cx.instance_ptr()).unwrap()),
_marker: marker::PhantomData,
})
}
/// Returns the number of elements in this list.
#[inline]
pub fn len(&self) -> usize {
self.len
}
/// Gets the `n`th element of this list.
///
/// Returns `None` if `index` is out of bounds. Returns `Some(Err(..))` if
/// the value couldn't be decoded (it was invalid). Returns `Some(Ok(..))`
/// if the value is valid.
///
/// # Panics
///
/// This function will panic if the list did not originally come from the
/// `store` specified.
//
// TODO: given that interface values are intended to be consumed in one go,
// should we even expose a random-access API? In theory all consumers
// should be validating through the iterator.
pub fn get(&self, mut store: impl AsContextMut, index: usize) -> Option<Result<T>> {
let store = store.as_context_mut().0;
self.options.store_id().assert_belongs_to(store.id());
// This should be safe because the `self.instance` pointer passed in has
// previously been validated by the lifting context this was originally
// created within, and the check above guarantees that this is the same
// store. This means the assertion made when the original lifting context
// created this type carries over here.
let mut cx =
unsafe { LiftContext::new(store, &self.options, &self.types, self.instance.as_ptr()) };
self.get_from_store(&mut cx, index)
}
fn get_from_store(&self, cx: &mut LiftContext<'_>, index: usize) -> Option<Result<T>> {
if index >= self.len {
return None;
}
// Note that this is using panicking indexing and this is expected to
// never fail. The bounds-checking here happened during the construction
// of the `WasmList` itself which means these should always be in-bounds
// (and wasm memory can only grow). This could theoretically be
// unchecked indexing if we're confident enough and it's actually a perf
// issue one day.
let bytes = &cx.memory()[self.ptr + index * T::SIZE32..][..T::SIZE32];
Some(T::load(cx, self.elem, bytes))
}
/// Returns an iterator over the elements of this list.
///
/// Each item of the list may fail to decode, which is surfaced through the
/// `Result` items yielded by the iterator.
pub fn iter<'a, U: 'a>(
&'a self,
store: impl Into<StoreContextMut<'a, U>>,
) -> impl ExactSizeIterator<Item = Result<T>> + 'a {
let store = store.into().0;
self.options.store_id().assert_belongs_to(store.id());
// See comments about unsafety in the `get` method.
let mut cx =
unsafe { LiftContext::new(store, &self.options, &self.types, self.instance.as_ptr()) };
(0..self.len).map(move |i| self.get_from_store(&mut cx, i).unwrap())
}
}
macro_rules! raw_wasm_list_accessors {
($($i:ident)*) => ($(
impl WasmList<$i> {
/// Get access to the raw underlying memory for this list.
///
/// This method will return a direct slice into the original wasm
/// module's linear memory where the data for this slice is stored.
/// This allows the embedder to have efficient access to the
/// underlying memory if needed and avoid copies and such if
/// desired.
///
/// Note that multi-byte integers are stored in little-endian format
/// so portable processing of this slice must be aware of the host's
/// byte-endianness. The `from_le` constructors in the Rust standard
/// library should be suitable for converting from little-endian.
///
/// # Panics
///
/// Panics if the `store` provided is not the one from which this
/// slice originated.
pub fn as_le_slice<'a, T: 'a>(&self, store: impl Into<StoreContext<'a, T>>) -> &'a [$i] {
let memory = self.options.memory(store.into().0);
self._as_le_slice(memory)
}
fn _as_le_slice<'a>(&self, all_of_memory: &'a [u8]) -> &'a [$i] {
// See comments in `WasmList::get` for the panicking indexing
let byte_size = self.len * mem::size_of::<$i>();
let bytes = &all_of_memory[self.ptr..][..byte_size];
// The canonical ABI requires that everything is aligned to its
// own size, so this should be an aligned array. Furthermore the
// alignment of primitive integers for hosts should be smaller
// than or equal to the size of the primitive itself, meaning
// that a wasm canonical-abi-aligned list is also aligned for
// the host. That should mean that the head/tail slices here are
// empty.
//
// Also note that the `unsafe` here is needed since the type
// we're aligning to isn't guaranteed to be valid, but in our
// case it's just integers and bytes so this should be safe.
unsafe {
let (head, body, tail) = bytes.align_to::<$i>();
assert!(head.is_empty() && tail.is_empty());
body
}
}
}
)*)
}
raw_wasm_list_accessors! {
i8 i16 i32 i64
u8 u16 u32 u64
}
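// A hedged usage sketch for the `as_le_slice` accessors generated above,
// assuming `list` is a `WasmList<u32>` previously returned from a
// `TypedFunc` that lives in `store`:
//
//     let raw: &[u32] = list.as_le_slice(&store);
//     // The data is little-endian in memory, so portable code byte-swaps
//     // on big-endian hosts via the standard `from_le` constructors:
//     let first = raw.first().copied().map(u32::from_le);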
// Note that this is similar to `ComponentType for [T]` except it can only
// be used for lifting, not lowering.
unsafe impl<T: ComponentType> ComponentType for WasmList<T> {
type Lower = <[T] as ComponentType>::Lower;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::POINTER_PAIR;
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()> {
<[T] as ComponentType>::typecheck(ty, types)
}
}
unsafe impl<T: Lift> Lift for WasmList<T> {
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
let elem = match ty {
InterfaceType::List(i) => cx.types[i].element,
_ => bad_type_info(),
};
// FIXME: needs memory64 treatment
let ptr = src[0].get_u32();
let len = src[1].get_u32();
let (ptr, len) = (
usize::try_from(ptr).err2anyhow()?,
usize::try_from(len).err2anyhow()?,
);
WasmList::new(ptr, len, cx, elem)
}
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
let elem = match ty {
InterfaceType::List(i) => cx.types[i].element,
_ => bad_type_info(),
};
debug_assert!((bytes.as_ptr() as usize) % (Self::ALIGN32 as usize) == 0);
// FIXME: needs memory64 treatment
let ptr = u32::from_le_bytes(bytes[..4].try_into().unwrap());
let len = u32::from_le_bytes(bytes[4..].try_into().unwrap());
let (ptr, len) = (
usize::try_from(ptr).err2anyhow()?,
usize::try_from(len).err2anyhow()?,
);
WasmList::new(ptr, len, cx, elem)
}
}
/// Verify that the given wasm type is a tuple with the expected fields in the right order.
fn typecheck_tuple(
ty: &InterfaceType,
types: &InstanceType<'_>,
expected: &[fn(&InterfaceType, &InstanceType<'_>) -> Result<()>],
) -> Result<()> {
match ty {
InterfaceType::Tuple(t) => {
let tuple = &types.types[*t];
if tuple.types.len() != expected.len() {
bail!(
"expected {}-tuple, found {}-tuple",
expected.len(),
tuple.types.len()
);
}
for (ty, check) in tuple.types.iter().zip(expected) {
check(ty, types)?;
}
Ok(())
}
other => bail!("expected `tuple` found `{}`", desc(other)),
}
}
/// Verify that the given wasm type is a record with the expected fields in the right order and with the right
/// names.
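///
/// A hedged sketch of the `expected` argument for a component type like
/// `record point { x: u32, y: u32 }` (the names here are illustrative):
///
/// ```ignore
/// typecheck_record(ty, types, &[
///     ("x", <u32 as ComponentType>::typecheck),
///     ("y", <u32 as ComponentType>::typecheck),
/// ])
/// ```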
pub fn typecheck_record(
ty: &InterfaceType,
types: &InstanceType<'_>,
expected: &[(&str, fn(&InterfaceType, &InstanceType<'_>) -> Result<()>)],
) -> Result<()> {
match ty {
InterfaceType::Record(index) => {
let fields = &types.types[*index].fields;
if fields.len() != expected.len() {
bail!(
"expected record of {} fields, found {} fields",
expected.len(),
fields.len()
);
}
for (field, &(name, check)) in fields.iter().zip(expected) {
check(&field.ty, types)
.with_context(|| format!("type mismatch for field {name}"))?;
if field.name != name {
bail!("expected record field named {}, found {}", name, field.name);
}
}
Ok(())
}
other => bail!("expected `record` found `{}`", desc(other)),
}
}
/// Verify that the given wasm type is a variant with the expected cases in the right order and with the right
/// names.
pub fn typecheck_variant(
ty: &InterfaceType,
types: &InstanceType<'_>,
expected: &[(
&str,
Option<fn(&InterfaceType, &InstanceType<'_>) -> Result<()>>,
)],
) -> Result<()> {
match ty {
InterfaceType::Variant(index) => {
let cases = &types.types[*index].cases;
if cases.len() != expected.len() {
bail!(
"expected variant of {} cases, found {} cases",
expected.len(),
cases.len()
);
}
for ((case_name, case_ty), &(name, check)) in cases.iter().zip(expected) {
if *case_name != name {
bail!("expected variant case named {name}, found {case_name}");
}
match (check, case_ty) {
(Some(check), Some(ty)) => check(ty, types)
.with_context(|| format!("type mismatch for case {name}"))?,
(None, None) => {}
(Some(_), None) => {
bail!("case `{name}` has no type but one was expected")
}
(None, Some(_)) => {
bail!("case `{name}` has a type but none was expected")
}
}
}
Ok(())
}
other => bail!("expected `variant` found `{}`", desc(other)),
}
}
/// Verify that the given wasm type is an enum with the expected cases in the right order and with the right
/// names.
pub fn typecheck_enum(
ty: &InterfaceType,
types: &InstanceType<'_>,
expected: &[&str],
) -> Result<()> {
match ty {
InterfaceType::Enum(index) => {
let names = &types.types[*index].names;
if names.len() != expected.len() {
bail!(
"expected enum of {} names, found {} names",
expected.len(),
names.len()
);
}
for (name, expected) in names.iter().zip(expected) {
if name != expected {
bail!("expected enum case named {}, found {}", expected, name);
}
}
Ok(())
}
other => bail!("expected `enum` found `{}`", desc(other)),
}
}
/// Verify that the given wasm type is a flags type with the expected flags in the right order and with the right
/// names.
pub fn typecheck_flags(
ty: &InterfaceType,
types: &InstanceType<'_>,
expected: &[&str],
) -> Result<()> {
match ty {
InterfaceType::Flags(index) => {
let names = &types.types[*index].names;
if names.len() != expected.len() {
bail!(
"expected flags type with {} names, found {} names",
expected.len(),
names.len()
);
}
for (name, expected) in names.iter().zip(expected) {
if name != expected {
bail!("expected flag named {}, found {}", expected, name);
}
}
Ok(())
}
other => bail!("expected `flags` found `{}`", desc(other)),
}
}
/// Format the specified bitflags using the specified names for debugging
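///
/// For example (illustrative values): with `names = ["read", "write", "exec"]`
/// and `bits = [0b101]` this writes `(read|exec)`, since flag `i` is stored at
/// bit `i % 32` of word `i / 32`.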
pub fn format_flags(bits: &[u32], names: &[&str], f: &mut fmt::Formatter) -> fmt::Result {
f.write_str("(")?;
let mut wrote = false;
for (index, name) in names.iter().enumerate() {
if ((bits[index / 32] >> (index % 32)) & 1) != 0 {
if wrote {
f.write_str("|")?;
} else {
wrote = true;
}
f.write_str(name)?;
}
}
f.write_str(")")
}
unsafe impl<T> ComponentType for Option<T>
where
T: ComponentType,
{
type Lower = TupleLower2<<u32 as ComponentType>::Lower, T::Lower>;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::variant_static(&[None, Some(T::ABI)]);
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::Option(t) => T::typecheck(&types.types[*t].ty, types),
other => bail!("expected `option` found `{}`", desc(other)),
}
}
}
unsafe impl<T> ComponentVariant for Option<T>
where
T: ComponentType,
{
const CASES: &'static [Option<CanonicalAbiInfo>] = &[None, Some(T::ABI)];
}
unsafe impl<T> Lower for Option<T>
where
T: Lower,
{
fn lower<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
let payload = match ty {
InterfaceType::Option(ty) => cx.types[ty].ty,
_ => bad_type_info(),
};
match self {
None => {
map_maybe_uninit!(dst.A1).write(ValRaw::i32(0));
// Note that this is unsafe as we're writing an arbitrary
// bit-pattern to an arbitrary type, but part of the unsafe
// contract of the `ComponentType` trait is that we can assign
// any bit-pattern. By writing all zeros here we're ensuring
// that the core wasm arguments this translates to will all be
// zeros (as the canonical ABI requires).
unsafe {
map_maybe_uninit!(dst.A2).as_mut_ptr().write_bytes(0u8, 1);
}
}
Some(val) => {
map_maybe_uninit!(dst.A1).write(ValRaw::i32(1));
val.lower(cx, payload, map_maybe_uninit!(dst.A2))?;
}
}
Ok(())
}
fn store<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
debug_assert!(offset % (Self::ALIGN32 as usize) == 0);
let payload = match ty {
InterfaceType::Option(ty) => cx.types[ty].ty,
_ => bad_type_info(),
};
match self {
None => {
cx.get::<1>(offset)[0] = 0;
}
Some(val) => {
cx.get::<1>(offset)[0] = 1;
val.store(cx, payload, offset + (Self::INFO.payload_offset32 as usize))?;
}
}
Ok(())
}
}
unsafe impl<T> Lift for Option<T>
where
T: Lift,
{
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
let payload = match ty {
InterfaceType::Option(ty) => cx.types[ty].ty,
_ => bad_type_info(),
};
Ok(match src.A1.get_i32() {
0 => None,
1 => Some(T::lift(cx, payload, &src.A2)?),
_ => bail!("invalid option discriminant"),
})
}
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!((bytes.as_ptr() as usize) % (Self::ALIGN32 as usize) == 0);
let payload_ty = match ty {
InterfaceType::Option(ty) => cx.types[ty].ty,
_ => bad_type_info(),
};
let discrim = bytes[0];
let payload = &bytes[Self::INFO.payload_offset32 as usize..];
match discrim {
0 => Ok(None),
1 => Ok(Some(T::load(cx, payload_ty, payload)?)),
_ => bail!("invalid option discriminant"),
}
}
}
#[derive(Clone, Copy)]
#[repr(C)]
pub struct ResultLower<T: Copy, E: Copy> {
tag: ValRaw,
payload: ResultLowerPayload<T, E>,
}
#[derive(Clone, Copy)]
#[repr(C)]
union ResultLowerPayload<T: Copy, E: Copy> {
ok: T,
err: E,
}
unsafe impl<T, E> ComponentType for Result<T, E>
where
T: ComponentType,
E: ComponentType,
{
type Lower = ResultLower<T::Lower, E::Lower>;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::variant_static(&[Some(T::ABI), Some(E::ABI)]);
fn typecheck(ty: &InterfaceType, types: &InstanceType<'_>) -> Result<()> {
match ty {
InterfaceType::Result(r) => {
let result = &types.types[*r];
match &result.ok {
Some(ty) => T::typecheck(ty, types)?,
None if T::IS_RUST_UNIT_TYPE => {}
None => bail!("expected no `ok` type"),
}
match &result.err {
Some(ty) => E::typecheck(ty, types)?,
None if E::IS_RUST_UNIT_TYPE => {}
None => bail!("expected no `err` type"),
}
Ok(())
}
other => bail!("expected `result` found `{}`", desc(other)),
}
}
}
/// Lowers the payload of a variant into the storage for the entire payload,
/// handling writing zeros at the end of the representation if this payload is
/// smaller than the entire flat representation.
///
/// * `payload` - the flat storage space for the entire payload of the variant
/// * `typed_payload` - projection from the payload storage space to the
/// individual storage space for this variant.
/// * `lower` - lowering operation used to initialize the `typed_payload` return
/// value.
///
/// For more information on this see the comments in the `Lower for Result`
/// implementation below.
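///
/// A sketch of a typical call site, mirroring the `Lower for Result`
/// implementation below (`dst` being the variant's `MaybeUninit` storage):
///
/// ```ignore
/// map_maybe_uninit!(dst.tag).write(ValRaw::i32(0));
/// unsafe {
///     lower_payload(
///         map_maybe_uninit!(dst.payload),
///         |payload| map_maybe_uninit!(payload.ok),
///         |dst| value.lower(cx, payload_ty, dst),
///     )?;
/// }
/// ```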
pub unsafe fn lower_payload<P, T>(
payload: &mut MaybeUninit<P>,
typed_payload: impl FnOnce(&mut MaybeUninit<P>) -> &mut MaybeUninit<T>,
lower: impl FnOnce(&mut MaybeUninit<T>) -> Result<()>,
) -> Result<()> {
let typed = typed_payload(payload);
lower(typed)?;
let typed_len = storage_as_slice(typed).len();
let payload = storage_as_slice_mut(payload);
for slot in payload[typed_len..].iter_mut() {
*slot = ValRaw::u64(0);
}
Ok(())
}
unsafe impl<T, E> ComponentVariant for Result<T, E>
where
T: ComponentType,
E: ComponentType,
{
const CASES: &'static [Option<CanonicalAbiInfo>] = &[Some(T::ABI), Some(E::ABI)];
}
unsafe impl<T, E> Lower for Result<T, E>
where
T: Lower,
E: Lower,
{
fn lower<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
let (ok, err) = match ty {
InterfaceType::Result(ty) => {
let ty = &cx.types[ty];
(ty.ok, ty.err)
}
_ => bad_type_info(),
};
// This implementation of `Lower::lower`, if you're reading this file from
// the top, is the first location where the "join" logic of the component
// model's canonical ABI is encountered. The rough problem is that, say,
// we have a component model type of the form:
//
// (result u64 (error (tuple f32 u16)))
//
// The flat representation of this is actually pretty tricky. Currently
// it is:
//
// i32 i64 i32
//
// The first `i32` is the discriminant for the `result`, and the payload
// is represented by `i64 i32`. The "ok" variant will only use the `i64`
// and the "err" variant will use both `i64` and `i32`.
//
// In the "ok" variant the first issue is encountered. The size of one
// variant may not match the size of the other variants. All variants
// start at the "front" but when lowering a type we need to be sure to
// initialize the later variants (lest we leak random host memory into
// the guest module). Due to how the `Lower` type is represented as a
// `union` of all the variants what ends up happening here is that
// internally within the `lower_payload` after the typed payload is
// lowered the remaining bits of the payload that weren't initialized
// are all set to zero. This will guarantee that we'll write to all the
// slots for each variant.
//
// The "err" variant encounters the second issue, however, which is that
// the flat representation for each type may differ between payloads. In
// the "ok" arm an `i64` is written, but the `lower` implementation for
// the "err" arm will write an `f32` and then an `i32`. For this
// implementation of `lower` to be valid the `f32` needs to get inflated
// to an `i64` with zero-padding in the upper bits. What may be
// surprising, however, is that none of this is handled in this file.
// This implementation looks like it's blindly deferring to `E::lower`
// and hoping it does the right thing.
//
// In reality, however, the correctness of variant lowering relies on
// two subtle details of the `ValRaw` implementation in Wasmtime:
//
// 1. First, the `ValRaw` value always contains little-endian values.
// This means that if a `u32` is written, a `u64` is read back, and the
// `u64` then has its upper bits truncated, the original value will
// always be retained. This is primarily here for big-endian platforms:
// if storage weren't little-endian then the truncation would keep the
// wrong half and the wrong value would be read.
//
// 2. Second, and perhaps even more subtly, the `ValRaw` constructors
// for 32-bit types actually always initialize 64-bits of the
// `ValRaw`. In the component model flat ABI only 32 and 64-bit types
// are used so 64-bits is big enough to contain everything. This
// means that when a `ValRaw` is written into the destination it will
// always, whether it's needed or not, be "ready" to get extended up
// to 64-bits.
//
// Put together, these two subtle guarantees mean that all `Lower`
// implementations can be written "naturally" as one might naively
// expect. Variants will, on each arm, zero out remaining fields, and all
// writes to the flat representation will automatically be 64-bit writes,
// meaning that if the value is later read as a 64-bit value, which isn't
// known at the time of the write, it'll still be correct.
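//
// As a concrete illustration with the example type above: the "ok" arm
// writes `ValRaw::i64(x)` into the first payload slot, while the "err"
// arm writes the `f32` there through a 32-bit `ValRaw` constructor. Per
// guarantee (2) that constructor still initializes all 64 bits of the
// slot, and per guarantee (1) the bytes are little-endian, so a later
// 64-bit read of the slot sees the `f32` bits zero-extended to an `i64`,
// exactly the "join" behavior the canonical ABI requires.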
match self {
Ok(e) => {
map_maybe_uninit!(dst.tag).write(ValRaw::i32(0));
unsafe {
lower_payload(
map_maybe_uninit!(dst.payload),
|payload| map_maybe_uninit!(payload.ok),
|dst| match ok {
Some(ok) => e.lower(cx, ok, dst),
None => Ok(()),
},
)
}
}
Err(e) => {
map_maybe_uninit!(dst.tag).write(ValRaw::i32(1));
unsafe {
lower_payload(
map_maybe_uninit!(dst.payload),
|payload| map_maybe_uninit!(payload.err),
|dst| match err {
Some(err) => e.lower(cx, err, dst),
None => Ok(()),
},
)
}
}
}
}
fn store<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
offset: usize,
) -> Result<()> {
let (ok, err) = match ty {
InterfaceType::Result(ty) => {
let ty = &cx.types[ty];
(ty.ok, ty.err)
}
_ => bad_type_info(),
};
debug_assert!(offset % (Self::ALIGN32 as usize) == 0);
let payload_offset = Self::INFO.payload_offset32 as usize;
match self {
Ok(e) => {
cx.get::<1>(offset)[0] = 0;
if let Some(ok) = ok {
e.store(cx, ok, offset + payload_offset)?;
}
}
Err(e) => {
cx.get::<1>(offset)[0] = 1;
if let Some(err) = err {
e.store(cx, err, offset + payload_offset)?;
}
}
}
Ok(())
}
}
unsafe impl<T, E> Lift for Result<T, E>
where
T: Lift,
E: Lift,
{
#[inline]
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, src: &Self::Lower) -> Result<Self> {
let (ok, err) = match ty {
InterfaceType::Result(ty) => {
let ty = &cx.types[ty];
(ty.ok, ty.err)
}
_ => bad_type_info(),
};
// Note that this implementation specifically isn't trying to actually
// reinterpret or alter the bits of `lower` depending on which variant
// we're lifting. This ends up all working out because the value is
// stored in little-endian format.
//
// Because everything is stored in little-endian format, when each
// individual `ValRaw` of the `{T,E}::Lower` is read, an i64 value that
// was extended from an i32 value will automatically have its upper bits
// ignored when read back as an i32.
//
// This "trick" allows us to seamlessly pass through the `Self::Lower`
// representation into the lifting/lowering without trying to handle
// "join"ed types as per the canonical ABI. It just so happens that i64
// bits will naturally be reinterpreted as f64. Additionally if the
// joined type is i64 but only the lower bits are read that's ok and we
// don't need to validate the upper bits.
//
// This is largely enabled by WebAssembly/component-model#35 where no
// validation needs to be performed for ignored bits and bytes here.
Ok(match src.tag.get_i32() {
0 => Ok(unsafe { lift_option(cx, ok, &src.payload.ok)? }),
1 => Err(unsafe { lift_option(cx, err, &src.payload.err)? }),
_ => bail!("invalid expected discriminant"),
})
}
#[inline]
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!((bytes.as_ptr() as usize) % (Self::ALIGN32 as usize) == 0);
let discrim = bytes[0];
let payload = &bytes[Self::INFO.payload_offset32 as usize..];
let (ok, err) = match ty {
InterfaceType::Result(ty) => {
let ty = &cx.types[ty];
(ty.ok, ty.err)
}
_ => bad_type_info(),
};
match discrim {
0 => Ok(Ok(load_option(cx, ok, &payload[..T::SIZE32])?)),
1 => Ok(Err(load_option(cx, err, &payload[..E::SIZE32])?)),
_ => bail!("invalid expected discriminant"),
}
}
}
fn lift_option<T>(cx: &mut LiftContext<'_>, ty: Option<InterfaceType>, src: &T::Lower) -> Result<T>
where
T: Lift,
{
match ty {
Some(ty) => T::lift(cx, ty, src),
None => Ok(empty_lift()),
}
}
fn load_option<T>(cx: &mut LiftContext<'_>, ty: Option<InterfaceType>, bytes: &[u8]) -> Result<T>
where
T: Lift,
{
match ty {
Some(ty) => T::load(cx, ty, bytes),
None => Ok(empty_lift()),
}
}
fn empty_lift<T>() -> T
where
T: Lift,
{
assert!(T::IS_RUST_UNIT_TYPE);
assert_eq!(mem::size_of::<T>(), 0);
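// SAFETY: the assertions above guarantee that `T` is a zero-sized unit
// type, so there are no bytes to initialize and `assume_init` on
// uninitialized storage is sound here.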
unsafe { MaybeUninit::uninit().assume_init() }
}
macro_rules! impl_component_ty_for_tuples {
($n:tt $($t:ident)*) => {paste::paste!{
#[allow(non_snake_case)]
#[doc(hidden)]
#[derive(Clone, Copy)]
#[repr(C)]
pub struct [<TupleLower$n>]<$($t),*> {
$($t: $t,)*
_align_tuple_lower0_correctly: [ValRaw; 0],
}
#[allow(non_snake_case)]
unsafe impl<$($t,)*> ComponentType for ($($t,)*)
where $($t: ComponentType),*
{
type Lower = [<TupleLower$n>]<$($t::Lower),*>;
const ABI: CanonicalAbiInfo = CanonicalAbiInfo::record_static(&[
$($t::ABI),*
]);
const IS_RUST_UNIT_TYPE: bool = {
let mut _is_unit = true;
$(
let _anything_to_bind_the_macro_variable = $t::IS_RUST_UNIT_TYPE;
_is_unit = false;
)*
_is_unit
};
fn typecheck(
ty: &InterfaceType,
types: &InstanceType<'_>,
) -> Result<()> {
typecheck_tuple(ty, types, &[$($t::typecheck),*])
}
}
#[allow(non_snake_case)]
unsafe impl<$($t,)*> Lower for ($($t,)*)
where $($t: Lower),*
{
fn lower<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
_dst: &mut MaybeUninit<Self::Lower>,
) -> Result<()> {
let types = match ty {
InterfaceType::Tuple(t) => &cx.types[t].types,
_ => bad_type_info(),
};
let ($($t,)*) = self;
let mut _types = types.iter();
$(
let ty = *_types.next().unwrap_or_else(bad_type_info);
$t.lower(cx, ty, map_maybe_uninit!(_dst.$t))?;
)*
Ok(())
}
fn store<U>(
&self,
cx: &mut LowerContext<'_, U>,
ty: InterfaceType,
mut _offset: usize,
) -> Result<()> {
debug_assert!(_offset % (Self::ALIGN32 as usize) == 0);
let types = match ty {
InterfaceType::Tuple(t) => &cx.types[t].types,
_ => bad_type_info(),
};
let ($($t,)*) = self;
let mut _types = types.iter();
$(
let ty = *_types.next().unwrap_or_else(bad_type_info);
$t.store(cx, ty, $t::ABI.next_field32_size(&mut _offset))?;
)*
Ok(())
}
}
#[allow(non_snake_case)]
unsafe impl<$($t,)*> Lift for ($($t,)*)
where $($t: Lift),*
{
#[inline]
fn lift(cx: &mut LiftContext<'_>, ty: InterfaceType, _src: &Self::Lower) -> Result<Self> {
let types = match ty {
InterfaceType::Tuple(t) => &cx.types[t].types,
_ => bad_type_info(),
};
let mut _types = types.iter();
Ok(($(
$t::lift(
cx,
*_types.next().unwrap_or_else(bad_type_info),
&_src.$t,
)?,
)*))
}
#[inline]
fn load(cx: &mut LiftContext<'_>, ty: InterfaceType, bytes: &[u8]) -> Result<Self> {
debug_assert!((bytes.as_ptr() as usize) % (Self::ALIGN32 as usize) == 0);
let types = match ty {
InterfaceType::Tuple(t) => &cx.types[t].types,
_ => bad_type_info(),
};
let mut _types = types.iter();
let mut _offset = 0;
$(
let ty = *_types.next().unwrap_or_else(bad_type_info);
let $t = $t::load(cx, ty, &bytes[$t::ABI.next_field32_size(&mut _offset)..][..$t::SIZE32])?;
)*
Ok(($($t,)*))
}
}
#[allow(non_snake_case)]
unsafe impl<$($t,)*> ComponentNamedList for ($($t,)*)
where $($t: ComponentType),*
{}
}};
}
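// Instantiate the tuple implementations above once for each arity supported
// by `for_each_function_signature!`.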
for_each_function_signature!(impl_component_ty_for_tuples);
pub fn desc(ty: &InterfaceType) -> &'static str {
match ty {
InterfaceType::U8 => "u8",
InterfaceType::S8 => "s8",
InterfaceType::U16 => "u16",
InterfaceType::S16 => "s16",
InterfaceType::U32 => "u32",
InterfaceType::S32 => "s32",
InterfaceType::U64 => "u64",
InterfaceType::S64 => "s64",
InterfaceType::Float32 => "f32",
InterfaceType::Float64 => "f64",
InterfaceType::Bool => "bool",
InterfaceType::Char => "char",
InterfaceType::String => "string",
InterfaceType::List(_) => "list",
InterfaceType::Tuple(_) => "tuple",
InterfaceType::Option(_) => "option",
InterfaceType::Result(_) => "result",
InterfaceType::Record(_) => "record",
InterfaceType::Variant(_) => "variant",
InterfaceType::Flags(_) => "flags",
InterfaceType::Enum(_) => "enum",
InterfaceType::Own(_) => "owned resource",
InterfaceType::Borrow(_) => "borrowed resource",
}
}
#[cold]
#[doc(hidden)]
pub fn bad_type_info<T>() -> T {
// NB: should consider something like `unreachable_unchecked` here if this
// becomes a performance bottleneck at some point, but that also comes with
// a tradeoff of propagating a lot of unsafety, so it may not be worth it.
panic!("bad type information detected");
}