WIP: add a float32-only device #206
I had a quick look at the diff; I think this is going in a direction I like. One thought to which I have no good answer: would it make sense to repurpose `device2`? Interested in what others think about this.

I am fine with repurposing `device2`.
I think it would make sense for the float32-only device to raise an exception on operations that mix it with another device. Note that this is not the case for PyTorch on the MPS device for some reason:

```python
>>> import torch
>>> a = torch.ones(10, device="mps")
>>> b = 2 * torch.ones(10)  # CPU
>>> a + b
Traceback (most recent call last):
  Cell In[9], line 1
    a + b
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, mps:0 and cpu!
>>> b[:5] = a[:5]  # implicit MPS to CPU transfer does not raise!
>>> b
tensor([1., 1., 1., 1., 1., 2., 2., 2., 2., 2.])
```

I would have expected the last statement to also raise.

EDIT: actually, this is not related to dtypes. Maybe I should open a dedicated issue for this.

EDIT2: done: #207
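For comparison, a consistent policy would make in-place assignment check devices the same way elementwise operations do. A minimal pure-Python sketch of such symmetric enforcement (the `SimpleArray` class and `f32_device` name are illustrative assumptions, not the array-api-strict implementation):

```python
class SimpleArray:
    """Toy array wrapper that enforces same-device semantics symmetrically."""

    def __init__(self, data, device="cpu"):
        self.data = list(data)
        self.device = device

    def _check_same_device(self, other):
        if isinstance(other, SimpleArray) and other.device != self.device:
            raise ValueError(
                f"Expected all arrays to be on the same device, "
                f"but found {self.device!r} and {other.device!r}"
            )

    def __add__(self, other):
        self._check_same_device(other)
        vals = other.data if isinstance(other, SimpleArray) else [other] * len(self.data)
        return SimpleArray([x + y for x, y in zip(self.data, vals)], device=self.device)

    def __setitem__(self, key, value):
        # Unlike the torch MPS behavior above, assignment also checks devices,
        # so there is no silent implicit transfer.
        self._check_same_device(value)
        self.data[key] = value.data if isinstance(value, SimpleArray) else value


a = SimpleArray([1.0] * 5, device="f32_device")
b = SimpleArray([2.0] * 5, device="cpu")

try:
    a + b
except ValueError as exc:
    print("add raised:", exc)

try:
    b[:5] = a  # raises too, unlike the torch example
except ValueError as exc:
    print("setitem raised:", exc)
```

With this policy, any cross-device mixing is an error, regardless of whether it happens in an arithmetic operation or an assignment.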
Add a new device which only supports single precision floats and does not support double precision floats.
The new device mimics the torch "mps" device in that it does not have f64 but supports int64, unlike JAX, which has either both or neither.
Done:

- make (`dtypes`, `default_dtypes`) device-aware;
- creation functions (`ones`, `empty`, etc.) use the device-specific default dtype when given `dtype=None, device=f32_device`.

TODO:

- `fft.{fftfreq, rfftfreq}`
- `device=` arguments in internal constructions

TBD:

- repurpose `device2` or add a new `F32_device` (if so, bikeshed the name)
- should `dtype=float64, device=f32_only_device` raise? torch "mps" tensors raise a TypeError; follow it, or mandate a ValueError?

Intends to close gh-64.
Gives a way to close gh-38: if we have an f32-only device, we probably do not need a global flag.
Addresses a large part of gh-70.
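The device-aware defaults and the open TBD about raising can be sketched together. Everything below is a hypothetical illustration, not the PR's actual code: the names (`Device`, `F32_DEVICE`, `ones`, `default_dtypes`) and the choice of `TypeError` (following torch "mps") are assumptions for the sketch.

```python
class Device:
    """Toy device descriptor; mps-like devices lack float64 but keep int64."""

    def __init__(self, name, has_float64=True):
        self.name = name
        self.has_float64 = has_float64


CPU = Device("cpu")
F32_DEVICE = Device("f32_device", has_float64=False)


def default_dtypes(*, device=CPU):
    # Device-specific defaults: the f32-only device defaults to float32.
    real = "float64" if device.has_float64 else "float32"
    return {"real floating": real, "integral": "int64"}


def ones(shape, *, dtype=None, device=CPU):
    if dtype is None:
        # dtype=None picks the device's default dtype.
        dtype = default_dtypes(device=device)["real floating"]
    elif dtype == "float64" and not device.has_float64:
        # Follow torch "mps", which raises TypeError for unsupported dtypes
        # (the TBD above asks whether it should be ValueError instead).
        raise TypeError(f"float64 is not supported on device {device.name!r}")
    return {"shape": shape, "dtype": dtype, "device": device.name}


print(ones((3,))["dtype"])                     # float64 on cpu
print(ones((3,), device=F32_DEVICE)["dtype"])  # float32 on the f32-only device
```

Requesting `ones((3,), dtype="float64", device=F32_DEVICE)` raises in this sketch, which is one of the two behaviors the TBD list leaves open.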
Cross-ref the spec RFC to allow for missing dtypes, data-apis/array-api#998. Note that this `array-api-strict` PR can only land after the spec is updated.

Also cross-ref the test suite tracker, data-apis/array-api-tests#431: the test suite is actually fairly far along.