path: root/lib/swiotlb.c
Age | Commit message | Author | Files | Lines
2009-04-08 | swiotlb: map_page fix for highmem systems | Becky Bruce | 1 | -2/+1
The current code calls virt_to_phys() on an address that might be in highmem, which is bad. This wasn't needed, anyway, because we already have the physical address we need. Get rid of the now-unused virtual address as well. Signed-off-by: Becky Bruce <[email protected]> Acked-by: FUJITA Tomonori <[email protected]> Signed-off-by: Kumar Gala <[email protected]> Cc: [email protected] Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-04-08 | swiotlb: fix compile warning | Becky Bruce | 1 | -1/+1
Squash a build warning seen on 32-bit powerpc caused by calling min() with 2 different types. Use min_t() instead. Signed-off-by: Becky Bruce <[email protected]> Acked-by: FUJITA Tomonori <[email protected]> Signed-off-by: Kumar Gala <[email protected]> Cc: [email protected] Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
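For readers unfamiliar with the kernel helper: min() under the kernel's strict type-checking macro warns when its two arguments have different types, while min_t() casts both sides to one named type first. A minimal userspace sketch of the idea (the variable names are hypothetical, not the ones in swiotlb.c, and the macro is a simplified stand-in that double-evaluates its arguments):

    #include <stddef.h>
    #include <stdio.h>

    /* Simplified stand-in for the kernel's min_t(): force both operands to one type. */
    #define min_t(type, a, b) ((type)(a) < (type)(b) ? (type)(a) : (type)(b))

    int main(void)
    {
        size_t pool_slots = 4096;   /* unsigned type, e.g. a pool size      */
        long   wanted     = 1024;   /* signed type, e.g. a caller's request */

        /* Comparing pool_slots and wanted directly mixes signed and unsigned
         * types, which is the kind of thing that triggered the 32-bit powerpc
         * build warning; casting both to size_t makes the comparison clean. */
        size_t n = min_t(size_t, pool_slots, wanted);
        printf("%zu\n", n);   /* 1024 */
        return 0;
    }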
2009-04-08 | swiotlb: comment corrections | Becky Bruce | 1 | -13/+11
Impact: cleanup swiotlb_map/unmap_single are now swiotlb_map/unmap_page; trivially change all the comments to reference new names. Also, there were some comments that should have been referring to just plain old map_single, not swiotlb_map_single; fix those as well. Also change a use of the word "pointer", when what is referred to is actually a dma/physical address. Signed-off-by: Becky Bruce <[email protected]> Acked-by: FUJITA Tomonori <[email protected]> Signed-off-by: Kumar Gala <[email protected]> Cc: [email protected] Cc: [email protected] LKML-Reference: <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-04-07 | dma-mapping: replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32) | Yang Hongyang | 1 | -1/+1
Replace all DMA_32BIT_MASK macro with DMA_BIT_MASK(32) Signed-off-by: Yang Hongyang<[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
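DMA_BIT_MASK(n) simply builds an n-bit address mask, which is why the open-coded DMA_32BIT_MASK constant could be replaced everywhere. A hedged sketch of the macro's shape (the 64-bit special case exists because shifting a 64-bit value by 64 is undefined in C):

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Sketch of the generic helper: a mask covering the low n address bits. */
    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    int main(void)
    {
        printf("0x%" PRIx64 "\n", (uint64_t)DMA_BIT_MASK(32)); /* 0xffffffff         */
        printf("0x%" PRIx64 "\n", (uint64_t)DMA_BIT_MASK(64)); /* 0xffffffffffffffff */
        return 0;
    }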
2009-01-11 | swiotlb: do not use sg_virt() | Ian Campbell | 1 | -7/+7
Scatterlists containing HighMem pages do not have a useful virtual address. Use the physical address instead. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-11 | swiotlb: range_needs_mapping should take a physical address. | Ian Campbell | 1 | -5/+5
The swiotlb_arch_range_needs_mapping() hook should take a physical address rather than a virtual address in order to support highmem pages. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-11 | Merge branch 'linus' into core/iommu | Ingo Molnar | 1 | -1/+1
2009-01-06 | Merge branch 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip | Linus Torvalds | 1 | -137/+100
* 'core-fixes-for-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/tip/linux-2.6-tip:
  rcu: fix rcutorture bug
  rcu: eliminate synchronize_rcu_xxx macro
  rcu: make treercu safe for suspend and resume
  rcu: fix rcutree grace-period-latency bug on small systems
  futex: catch certain assymetric (get|put)_futex_key calls
  futex: make futex_(get|put)_key() calls symmetric
  locking, percpu counters: introduce separate lock classes
  swiotlb: clean up EXPORT_SYMBOL usage
  swiotlb: remove unnecessary declaration
  swiotlb: replace architecture-specific swiotlb.h with linux/swiotlb.h
  swiotlb: add support for systems with highmem
  swiotlb: store phys address in io_tlb_orig_addr array
  swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus()
2009-01-06 | swiotlb: struct device - replace bus_id with dev_name(), dev_set_name() | Kay Sievers | 1 | -1/+1
Signed-off-by: Kay Sievers <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2009-01-06 | x86, ia64: remove duplicated swiotlb code | FUJITA Tomonori | 1 | -30/+18
This adds swiotlb_map_page and swiotlb_unmap_page to lib/swiotlb.c and removes IA64's and X86's swiotlb_map_page and swiotlb_unmap_page. This also removes the unnecessary swiotlb_map_single, swiotlb_map_single_attrs, swiotlb_unmap_single and swiotlb_unmap_single_attrs. Signed-off-by: FUJITA Tomonori <[email protected]> Acked-by: Tony Luck <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-06 | x86, ia64: convert to use generic dma_map_ops struct | FUJITA Tomonori | 1 | -8/+10
This converts X86 and IA64 to use include/linux/dma-mapping.h. It's a bit large but pretty boring. The major change for X86 is converting 'int dir' to 'enum dma_data_direction dir' in DMA mapping operations. The major change for IA64 is using map_page and unmap_page instead of map_single and unmap_single. Signed-off-by: FUJITA Tomonori <[email protected]> Acked-by: Tony Luck <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-05 | Merge branch 'core/iommu' into core/urgent | Ingo Molnar | 1 | -137/+100
Conflicts: lib/swiotlb.c
2009-01-04 | swiotlb: Don't include linux/swiotlb.h twice in lib/swiotlb.c | Jesper Juhl | 1 | -1/+0
There's no point in including the linux/swiotlb.h header twice in lib/swiotlb.c - this patch gets rid of the unneeded include. Signed-off-by: Jesper Juhl <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2009-01-02 | swiotlb: add missing __init annotations | Roland Dreier | 1 | -1/+1
Impact: cleanup, reduce kernel size a bit

The current kernel build warns:

  WARNING: vmlinux.o(.text+0x11458): Section mismatch in reference from the function swiotlb_alloc_boot() to the function .init.text:__alloc_bootmem_low()
  The function swiotlb_alloc_boot() references the function __init __alloc_bootmem_low(). This is often because swiotlb_alloc_boot lacks a __init annotation or the annotation of __alloc_bootmem_low is wrong.

  WARNING: vmlinux.o(.text+0x1011f2): Section mismatch in reference from the function swiotlb_late_init_with_default_size() to the function .init.text:__alloc_bootmem_low()
  The function swiotlb_late_init_with_default_size() references the function __init __alloc_bootmem_low(). This is often because swiotlb_late_init_with_default_size lacks a __init annotation or the annotation of __alloc_bootmem_low is wrong.

and indeed the functions calling __alloc_bootmem_low() can be marked __init as well. Signed-off-by: Roland Dreier <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-28 | swiotlb: clean up EXPORT_SYMBOL usage | FUJITA Tomonori | 1 | -14/+14
Impact: cleanup swiotlb uses EXPORT_SYMBOL in an inconsistent way. Some functions use EXPORT_SYMBOL at the end of functions. Some use it at the end of swiotlb.c. This cleans up swiotlb to use EXPORT_SYMBOL in a consistent way (at the end of functions). Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-28 | swiotlb: remove unnecessary declaration | FUJITA Tomonori | 1 | -3/+0
Impact: cleanup Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-28 | swiotlb: add support for systems with highmem | Becky Bruce | 1 | -17/+51
Impact: extend code for highmem - existing users unaffected On highmem systems, the original dma buffer might not have a virtual mapping - we need to kmap it in to perform the bounce. Extract the code that does the actual copy into a function that does the kmap if highmem is enabled, and default to the normal swiotlb memcpy if not. [ ported by Jeremy Fitzhardinge <[email protected]> ] Signed-off-by: Becky Bruce <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-28 | swiotlb: store phys address in io_tlb_orig_addr array | Becky Bruce | 1 | -90/+30
Impact: refactor code, cleanup When we enable swiotlb for platforms that support HIGHMEM, we can no longer store the virtual address of the original dma buffer, because that buffer might not have a permanent mapping. Change the swiotlb code to instead store the physical address of the original buffer. Signed-off-by: Becky Bruce <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-28 | swiotlb: add hwdev to swiotlb_phys_to_bus() / swiotlb_sg_to_bus() | Jeremy Fitzhardinge | 1 | -31/+22
Impact: extend functions with a (yet unused) parameter, update callsites Some architectures need it - in preparation for highmem swiotlb. Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-17 | swiotlb: consolidate swiotlb info message printing | Ian Campbell | 1 | -5/+28
Impact: clean up swiotlb printks Remove duplicated swiotlb info printing, and make it more detailed. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-17 | swiotlb: support bouncing of HighMem pages | Jeremy Fitzhardinge | 1 | -33/+89
Impact: prepare the swiotlb code for HighMem struct pages This requires us to treat DMA regions in terms of page+offset rather than virtual addressing since a HighMem page may not have a mapping. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-17 | swiotlb: factor out copy to/from device | Jeremy Fitzhardinge | 1 | -4/+13
Impact: generalize IO bounce memcpys Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-17 | swiotlb: add arch hook to force mapping | Ian Campbell | 1 | -2/+13
Impact: generalize the sw-IOTLB range checks Some architectures require special rules to determine whether a range needs mapping or not. This adds a weak function for architectures to override. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-17 | swiotlb: allow architectures to override phys<->bus<->phys conversions | Ian Campbell | 1 | -16/+36
Impact: generalize phys<->bus<->phys conversions in the swiotlb code Architectures may need to override these conversions. Implement a __weak hook point containing the default implementation. Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
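The __weak mechanism referred to here is plain weak linkage: the library provides a default conversion, and an architecture that defines a strong symbol with the same name overrides it at link time. A small standalone sketch of that pattern, with a hypothetical hook name (the real swiotlb hooks include swiotlb_phys_to_bus(), mentioned elsewhere in this log):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t phys_addr_t;
    typedef uint64_t dma_addr_t;

    /* Weak default: bus address equals physical address.  An architecture
     * (or, in the original motivation, Xen) can supply a strong definition
     * of the same symbol and the linker will pick that one instead. */
    __attribute__((weak)) dma_addr_t arch_phys_to_bus(phys_addr_t paddr)
    {
        return (dma_addr_t)paddr;
    }

    int main(void)
    {
        printf("bus = 0x%llx\n", (unsigned long long)arch_phys_to_bus(0x1000));
        return 0;
    }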
2008-12-17 | swiotlb: add comment where we handle the overflow of a dma mask on 32 bit | Ian Campbell | 1 | -0/+4
Impact: cleanup Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-16 | swiotlb: move some definitions to header | Ian Campbell | 1 | -13/+1
Impact: cleanup Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-12-16 | swiotlb: allow architectures to override swiotlb pool allocation | Jeremy Fitzhardinge | 1 | -3/+13
Impact: generalize swiotlb allocation code Architectures may need to allocate memory specially for use with the swiotlb. Create the weak function swiotlb_alloc_boot() and swiotlb_alloc() defaulting to the current behaviour. Signed-off-by: Jeremy Fitzhardinge <[email protected]> Signed-off-by: Ian Campbell <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-11-17 | swiotlb: use coherent_dma_mask in alloc_coherent | FUJITA Tomonori | 1 | -3/+7
Impact: fix DMA buffer allocation coherency bug in certain configs This patch fixes swiotlb to use dev->coherent_dma_mask in swiotlb_alloc_coherent(). coherent_dma_mask is a subset of dma_mask (equal to it most of the time), enumerating the address range that a given device is able to DMA to/from in a cache-coherent way. Currently, however, swiotlb uses dev->dma_mask in alloc_coherent() implicitly via address_needs_mapping(), even though alloc_coherent is really supposed to use coherent_dma_mask. This bug could break drivers that use a smaller coherent_dma_mask than dma_mask (though the current code works for the majority that use the same mask for coherent_dma_mask and dma_mask). Signed-off-by: FUJITA Tomonori <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
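The distinction matters because the two masks can differ: an allocation may satisfy dev->dma_mask yet still be unreachable for coherent use. A hypothetical userspace illustration of the kind of check alloc_coherent should be doing (names and numbers invented for the example, in the spirit of the kernel's is_buffer_dma_capable helper):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t dma_addr_t;

    /* Can the device reach every byte of [addr, addr + size) under this mask? */
    static bool buffer_dma_capable(uint64_t mask, dma_addr_t addr, size_t size)
    {
        return size != 0 && addr + size - 1 <= mask;
    }

    int main(void)
    {
        uint64_t dma_mask          = ~0ULL;         /* 64-bit streaming mask        */
        uint64_t coherent_dma_mask = 0xffffffffULL; /* but only 32-bit coherent DMA */
        dma_addr_t buf = 0x100000000ULL;            /* an allocation above 4 GiB    */

        /* Checking against dma_mask says "fine"; checking against
         * coherent_dma_mask correctly rejects the buffer, which is the
         * bug class this patch fixes. */
        printf("dma_mask ok:          %d\n", buffer_dma_capable(dma_mask, buf, 4096));          /* 1 */
        printf("coherent_dma_mask ok: %d\n", buffer_dma_capable(coherent_dma_mask, buf, 4096)); /* 0 */
        return 0;
    }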
2008-10-23 | swiotlb: remove panic for alloc_coherent failure | FUJITA Tomonori | 1 | -2/+4
swiotlb_alloc_coherent calls panic() when the allocated swiotlb pages don't fit a device's dma mask. However, alloc_coherent failure is not a disaster at all. AFAIK, none of the other x86 and IA64 IOMMU implementations crash in case of alloc_coherent failure. There are some drivers that don't check for alloc_coherent failure, but not many (about ten, and I've already started to fix some of them). alloc_coherent returns NULL in case of failure, so it's likely that these guilty drivers crash immediately. So swiotlb doesn't need to call panic() just for them. Reported-by: Takashi Iwai <[email protected]> Signed-off-by: FUJITA Tomonori <[email protected]> Tested-by: Takashi Iwai <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-09-19 | convert swiotlb to use dma_get_mask | FUJITA Tomonori | 1 | -5/+1
swiotlb can use dma_get_mask() instead of the homegrown function. Signed-off-by: FUJITA Tomonori <[email protected]> Cc: [email protected] Signed-off-by: Ingo Molnar <[email protected]>
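A rough model of what the generic helper does, to show why the homegrown copy was redundant. This is a sketch under assumptions: the struct and fallback here only approximate the real helper in linux/dma-mapping.h, which operates on struct device.

    #include <stdint.h>
    #include <stdio.h>

    #define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

    struct device {
        uint64_t *dma_mask;   /* pointer, so "no mask set" can be expressed as NULL */
    };

    /* Return the device's streaming DMA mask, falling back to 32 bits when
     * the driver never set one (approximation of the generic helper). */
    static uint64_t dma_get_mask(const struct device *dev)
    {
        if (dev && dev->dma_mask)
            return *dev->dma_mask;
        return DMA_BIT_MASK(32);
    }

    int main(void)
    {
        uint64_t mask = DMA_BIT_MASK(64);
        struct device dev = { .dma_mask = &mask };

        printf("0x%llx\n", (unsigned long long)dma_get_mask(&dev));                  /* full 64-bit mask */
        printf("0x%llx\n", (unsigned long long)dma_get_mask(&(struct device){ 0 })); /* 0xffffffff       */
        return 0;
    }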
2008-09-10 | swiotlb: convert swiotlb to use is_buffer_dma_capable helper function | FUJITA Tomonori | 1 | -7/+8
Signed-off-by: FUJITA Tomonori <[email protected]> Acked-by: Joerg Roedel <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-09-08 | swiotlb: add is_swiotlb_buffer helper function | FUJITA Tomonori | 1 | -5/+9
This adds is_swiotlb_buffer() helper function to see whether a buffer belongs to the swiotlb buffer or not. Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
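In spirit the helper is just a bounds check against the bounce-buffer pool. A self-contained sketch, where the pool is a hypothetical static array standing in for the real io_tlb area:

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    static char io_tlb_pool[1 << 16];   /* stand-in for the swiotlb bounce pool */

    /* Does the address fall inside the bounce pool?  Compared as integers to
     * keep the sketch free of pointer-comparison pitfalls. */
    static bool is_swiotlb_buffer(const void *addr)
    {
        uintptr_t a     = (uintptr_t)addr;
        uintptr_t start = (uintptr_t)io_tlb_pool;
        uintptr_t end   = start + sizeof(io_tlb_pool);

        return a >= start && a < end;
    }

    int main(void)
    {
        char outside;

        printf("%d\n", is_swiotlb_buffer(io_tlb_pool + 42)); /* 1: inside the pool */
        printf("%d\n", is_swiotlb_buffer(&outside));         /* 0: ordinary memory */
        return 0;
    }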
2008-09-08 | swiotlb: use unmap_single instead of swiotlb_unmap_single in swiotlb_free_coherent | FUJITA Tomonori | 1 | -1/+1
We don't need any check in swiotlb_unmap_single here. unmap_single is appropriate. Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-09-08 | swiotlb: use map_single instead of swiotlb_map_single in swiotlb_alloc_coherent | FUJITA Tomonori | 1 | -5/+2
We always need swiotlb memory here so address_needs_mapping and swiotlb_force testings are irrelevant. map_single should be used here instead of swiotlb_map_single. Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-09-08 | swiotlb: remove GFP_DMA hack in swiotlb_alloc_coherent | FUJITA Tomonori | 1 | -7/+0
The callers are supposed to set up the gfp flags appropriately. Signed-off-by: FUJITA Tomonori <[email protected]> Acked-by: Joerg Roedel <[email protected]> Signed-off-by: Ingo Molnar <[email protected]>
2008-07-26 | dma-mapping: add the device argument to dma_mapping_error() | FUJITA Tomonori | 1 | -2/+2
Add per-device dma_mapping_ops support for CONFIG_X86_64 as POWER architecture does: This enables us to cleanly fix the Calgary IOMMU issue that some devices are not behind the IOMMU (http://lkml.org/lkml/2008/5/8/423). I think that per-device dma_mapping_ops support would be also helpful for KVM people to support PCI passthrough but Andi thinks that this makes it difficult to support the PCI passthrough (see the above thread). So I CC'ed this to KVM camp. Comments are appreciated. A pointer to dma_mapping_ops to struct dev_archdata is added. If the pointer is non NULL, DMA operations in asm/dma-mapping.h use it. If it's NULL, the system-wide dma_ops pointer is used as before. If it's useful for KVM people, I plan to implement a mechanism to register a hook called when a new pci (or dma capable) device is created (it works with hot plugging). It enables IOMMUs to set up an appropriate dma_mapping_ops per device. The major obstacle is that dma_mapping_error doesn't take a pointer to the device unlike other DMA operations. So x86 can't have dma_mapping_ops per device. Note all the POWER IOMMUs use the same dma_mapping_error function so this is not a problem for POWER but x86 IOMMUs use different dma_mapping_error functions. The first patch adds the device argument to dma_mapping_error. The patch is trivial but large since it touches lots of drivers and dma-mapping.h in all the architecture. This patch: dma_mapping_error() doesn't take a pointer to the device unlike other DMA operations. So we can't have dma_mapping_ops per device. Note that POWER already has dma_mapping_ops per device but all the POWER IOMMUs use the same dma_mapping_error function. x86 IOMMUs use device argument. [[email protected]: fix sge] [[email protected]: fix svc_rdma] [[email protected]: build fix] [[email protected]: fix bnx2x] [[email protected]: fix s2io] [[email protected]: fix pasemi_mac] [[email protected]: fix sdhci] [[email protected]: build fix] [[email protected]: fix sparc] [[email protected]: fix ibmvscsi] Signed-off-by: FUJITA Tomonori <[email protected]> Cc: Muli Ben-Yehuda <[email protected]> Cc: Andi Kleen <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: Avi Kivity <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
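The core of the argument is mechanical: if mapping operations are looked up per device, then error checking must also receive the device, or there is no way to reach the right implementation. A hypothetical miniature of that structure (names invented for the sketch, not the kernel's dma_map_ops definition):

    #include <stdint.h>
    #include <stdio.h>

    typedef uint64_t dma_addr_t;

    struct device;

    /* Per-device operation table, as in the per-device dma_mapping_ops idea. */
    struct dma_ops_sketch {
        int (*mapping_error)(struct device *dev, dma_addr_t addr);
    };

    struct device {
        const struct dma_ops_sketch *ops;
    };

    /* One possible implementation: this IOMMU flags failures with an all-ones address. */
    static int example_mapping_error(struct device *dev, dma_addr_t addr)
    {
        (void)dev;
        return addr == (dma_addr_t)~0ULL;
    }

    static const struct dma_ops_sketch example_ops = {
        .mapping_error = example_mapping_error,
    };

    /* Without the device argument this dispatch would be impossible, which is
     * exactly the obstacle the changelog describes. */
    static int dma_mapping_error(struct device *dev, dma_addr_t addr)
    {
        return dev->ops->mapping_error(dev, addr);
    }

    int main(void)
    {
        struct device d = { .ops = &example_ops };

        printf("%d\n", dma_mapping_error(&d, (dma_addr_t)~0ULL)); /* 1: mapping failed    */
        printf("%d\n", dma_mapping_error(&d, 0x1000));            /* 0: mapping succeeded */
        return 0;
    }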
2008-04-29 | dma/ia64: update ia64 machvecs, swiotlb.c | Arthur Kepner | 1 | -8/+42
Change all ia64 machvecs to use the new dma_*map*_attrs() interfaces. Implement the old dma_*map_*() interfaces in terms of the corresponding new interfaces. For ia64/sn, make use of one dma attribute, DMA_ATTR_WRITE_BARRIER. Introduce swiotlb_*map*_attrs() functions. Signed-off-by: Arthur Kepner <[email protected]> Cc: Tony Luck <[email protected]> Cc: Jesse Barnes <[email protected]> Cc: Jes Sorensen <[email protected]> Cc: Randy Dunlap <[email protected]> Cc: Roland Dreier <[email protected]> Cc: James Bottomley <[email protected]> Cc: David Miller <[email protected]> Cc: Benjamin Herrenschmidt <[email protected]> Cc: Grant Grundler <[email protected]> Cc: Michael Ellerman <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-04-29 | swiotlb: use iommu_is_span_boundary helper function | FUJITA Tomonori | 1 | -11/+3
iommu_is_span_boundary in lib/iommu-helper.c was exported for PARISC IOMMUs (commit 3715863aa142c4f4c5208f5f3e5e9bac06006d2f). SWIOTLB can use it instead of the homegrown function. Signed-off-by: FUJITA Tomonori <[email protected]> Cc: Thomas Gleixner <[email protected]> Cc: Ingo Molnar <[email protected]> Cc: H. Peter Anvin <[email protected]> Cc: Tony Luck <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-04-29 | lib/swiotlb.c: cleanups | Andrew Morton | 1 | -46/+43
There's a pointlessly braced block of code in there. Remove the braces and save a tabstop. Cc: Andi Kleen <[email protected]> Cc: FUJITA Tomonori <[email protected]> Cc: Jan Beulich <[email protected]> Cc: Tony Luck <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2008-03-13 | avoid endless loops in lib/swiotlb.c | Jan Beulich | 1 | -14/+16
Commit 681cc5cd3efbeafca6386114070e0bfb5012e249 ("iommu sg merging: swiotlb: respect the segment boundary limits") introduced two possibilities for entering an endless loop in lib/swiotlb.c:
- if max_slots is zero (possible if mask is ~0UL)
- if the number of slots requested fits into a swiotlb segment, but is too large for the part of a segment which remains after considering offset_slots
This fixes both. Signed-off-by: Jan Beulich <[email protected]> Cc: FUJITA Tomonori <[email protected]> Cc: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
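The first of the two conditions above is easy to reproduce outside the kernel: with a device mask of ~0UL, the naive slot-count computation wraps to zero, and any loop that wraps its index modulo max_slots can then never make progress. Illustrative only; the slab shift here is chosen to mirror IO_TLB_SHIFT but is otherwise an assumption of the sketch.

    #include <stdio.h>

    int main(void)
    {
        const unsigned shift = 11;      /* 2 KiB slabs, mirroring IO_TLB_SHIFT   */
        unsigned long mask = ~0UL;      /* device claims it can address anything */

        /* mask + 1 wraps to 0 for an all-ones mask, so the computed slot
         * limit is 0 and index arithmetic based on it can never terminate. */
        unsigned long max_slots = (mask + 1) >> shift;

        printf("max_slots = %lu\n", max_slots);   /* prints 0 */
        return 0;
    }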
2008-02-05 | iommu sg merging: swiotlb: respect the segment boundary limits | FUJITA Tomonori | 1 | -6/+35
This patch makes swiotlb not allocate a memory area spanning LLD's segment boundary. is_span_boundary() judges whether a memory area spans LLD's segment boundary. If map_single finds such an area, it tries the next available memory area. Signed-off-by: FUJITA Tomonori <[email protected]> Cc: James Bottomley <[email protected]> Cc: Jens Axboe <[email protected]> Cc: Greg KH <[email protected]> Cc: Jeff Garzik <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
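The underlying test is a one-liner: a buffer crosses a boundary_size-aligned segment boundary exactly when its first and last byte differ in the bits above the boundary. A standalone sketch of that check, not the kernel's is_span_boundary() itself (boundary_size must be a power of two, as LLD segment limits are):

    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    /* Does [addr, addr + len) cross a boundary_size-aligned boundary?
     * boundary_size must be a power of two and len must be non-zero. */
    static bool spans_boundary(uint64_t addr, uint64_t len, uint64_t boundary_size)
    {
        uint64_t mask = ~(boundary_size - 1);

        return ((addr ^ (addr + len - 1)) & mask) != 0;
    }

    int main(void)
    {
        /* With a 64 KiB segment limit, a 4 KiB area ending exactly at the
         * boundary is fine, but one starting 2 KiB later spans two segments. */
        printf("%d\n", spans_boundary(0xf000, 0x1000, 0x10000)); /* 0: fits in one segment */
        printf("%d\n", spans_boundary(0xf800, 0x1000, 0x10000)); /* 1: crosses 0x10000     */
        return 0;
    }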
2007-10-22 | Update swiotlb to use sg helpers | Jens Axboe | 1 | -1/+1
Signed-off-by: Jens Axboe <[email protected]>
2007-10-17 | swiotlb: fix map_sg failure handling | FUJITA Tomonori | 1 | -1/+1
sg list elements might not be contiguous. Signed-off-by: FUJITA Tomonori <[email protected]> Signed-off-by: Jens Axboe <[email protected]>
2007-10-16 | swiotlb: sg chaining support | Jens Axboe | 1 | -7/+12
Signed-off-by: Jens Axboe <[email protected]>
2007-10-12 | dma_free_coherent() needs irqs enabled (sigh) | David Brownell | 1 | -0/+1
On at least ARM (and I'm told MIPS too) dma_free_coherent() has a newish call context requirement: unlike its dma_alloc_coherent() sibling, it may not be called with IRQs disabled. (This was new behavior on ARM as of late 2005, caused by ARM SMP updates.) This little surprise can be annoyingly driver-visible. Since it looks like that restriction won't be removed, this patch changes the definition of the API to include that requirement. Also, to help catch nonportable drivers, it updates the x86 and swiotlb versions to include the relevant warnings. (I already observed that it trips on the bus_reset_tasklet of the new firewire_ohci driver.) Signed-off-by: David Brownell <[email protected]> Cc: David Miller <[email protected]> Acked-by: Russell King <[email protected]> Cc: Andi Kleen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Greg Kroah-Hartman <[email protected]>
2007-07-21 | Fix swiotlb_sync_single_range() | Keir Fraser | 1 | -1/+4
If the swiotlb maps a multi-slab region, swiotlb_sync_single_range() can be invoked to sync a sub-region which does not include the first slab. Unfortunately io_tlb_orig_addr[] is only initialised for the first slab, and hence the call to sync_single() will read a garbage orig_addr in this case. This patch fixes the issue by initialising all mapped slabs in io_tlb_orig_addr[]. It also correctly adjusts the buffer pointer in sync_single() to handle the case that the given dma_addr is not aligned on a slab boundary. Signed-off-by: Keir Fraser <[email protected]> Cc: "Luck, Tony" <[email protected]> Acked-by: Andi Kleen <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-05-08 | fix section mismatch warning in lib/swiotlb.c | Sam Ravnborg | 1 | -1/+0
kbuild spits out the following warning on a defconfig x86_64 build: WARNING: swiotlb.o - Section mismatch: reference to .init.text:swiotlb_init from __ksymtab between '__ksymtab_swiotlb_init' (at offset 0xa0) and '__ksymtab_swiotlb_free_coherent' This warning happens because the function swiotlb_init is marked both __init and EXPORT_SYMBOL(). A 'git grep swiotlb_init' showed no users in drivers/, so remove the EXPORT_SYMBOL. Signed-off-by: Sam Ravnborg <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "Luck, Tony" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-03-06 | Revert "[IA64] swiotlb abstraction (e.g. for Xen)" | Tony Luck | 1 | -149/+35
This reverts commit 51099005ab8e09d68a13fea8d55bc739c1040ca6.
2007-02-12 | [PATCH] swiotlb uninlinings | Andrew Morton | 1 | -4/+4
Optimise swiotlb.c for size.

   text    data     bss     dec     hex filename
   5009      89      64    5162    142a lib/swiotlb.o-before
   4666      89      64    4819    12d3 lib/swiotlb.o-after

For some reason my gcc (4.0.2) doesn't want to tailcall these things.

  swiotlb_sync_sg_for_device:
          pushq   %rbp            #
          movl    $1, %r8d        #,
          movq    %rsp, %rbp      #,
          call    swiotlb_sync_sg #
          leave
          ret
          .size swiotlb_sync_sg_for_device, .-swiotlb_sync_sg_for_device
          .section .text.swiotlb_sync_sg_for_cpu,"ax",@progbits
          .globl swiotlb_sync_sg_for_cpu
          .type swiotlb_sync_sg_for_cpu, @function
  swiotlb_sync_sg_for_cpu:
          pushq   %rbp            #
          xorl    %r8d, %r8d      #
          movq    %rsp, %rbp      #,
          call    swiotlb_sync_sg #
          leave
          ret

Cc: Jan Beulich <[email protected]> Cc: Andi Kleen <[email protected]> Cc: "Luck, Tony" <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Linus Torvalds <[email protected]>
2007-02-05 | [IA64] swiotlb abstraction (e.g. for Xen) | Jan Beulich | 1 | -35/+149
Add abstraction so that the file can be used by environments other than IA64 and EM64T, namely for Xen. Signed-off-by: Jan Beulich <[email protected]> Signed-off-by: Andrew Morton <[email protected]> Signed-off-by: Tony Luck <[email protected]>