mirror_zfs/module/lua/setjmp/setjmp_x86_64.S

/*
* CDDL HEADER START
*
* The contents of this file are subject to the terms of the
* Common Development and Distribution License (the "License").
* You may not use this file except in compliance with the License.
*
* You can obtain a copy of the license at usr/src/OPENSOLARIS.LICENSE
* or https://opensource.org/licenses/CDDL-1.0.
* See the License for the specific language governing permissions
* and limitations under the License.
*
* When distributing Covered Code, include this CDDL HEADER in each
* file and include the License file at usr/src/OPENSOLARIS.LICENSE.
* If applicable, add the following below this CDDL HEADER, with the
* fields enclosed by brackets "[]" replaced with your own identifying
* information: Portions Copyright [yyyy] [name of copyright owner]
*
* CDDL HEADER END
*/
/*
* Copyright (c) 1992, 2010, Oracle and/or its affiliates. All rights reserved.
*/
icp: properly fix all RETs in x86_64 Asm code

Commit 43569ee37420 ("Fix objtool: missing int3 after ret warning")
replaced all `ret`s in the x86 asm code with a macro in order to enable
SLS in the Linux kernel. That was done by copying the upstream macro
definitions, which fixed the objtool complaints.

Since then, several more mitigations were introduced, including Rethunk.
It requires a jump to one of the thunks in order to work, so the RET
macro was changed again. And since the ZFS code copied the definition
instead of using the mainline one, this change is currently missing.
Objtool reminds about it from time to time (Clang 16, CONFIG_RETHUNK=y):

fs/zfs/lua/zlua.o: warning: objtool: setjmp+0x25: 'naked' return found in RETHUNK build
fs/zfs/lua/zlua.o: warning: objtool: longjmp+0x27: 'naked' return found in RETHUNK build

Do it the following way:

* if we're building under Linux, unconditionally include
  <linux/linkage.h> in the related files. It has been available in the
  x86 sources since pre-2.6 times, so it doesn't need any conftests;
* then, if the RET macro is available, it is used directly, so that we
  always have the version matching the kernel we build against;
* if there's no such macro, we define it as a simple `ret`, as it was
  in pre-SLS times.

This ensures we always have an up-to-date definition with no need to
update it manually, and at the same time it is safe for the whole range
of kernels the ZFS module supports.

Then, there are a couple more "naked" rets left in the code, defined as:

.byte 0xf3,0xc3

In fact, this is just:

rep ret

`rep ret` instead of plain `ret` seems to mitigate performance issues on
some old AMD processors and most likely makes no sense as of today.
Either way, address those rets so that they are protected by Rethunk and
SLS: include <sys/asm_linkage.h>, which now always has a RET definition,
and replace those constructs with just RET. This wipes the last couple
of places with unpatched rets objtool has been complaining about.

Reviewed-by: Attila Fülöp <attila@fueloep.org>
Reviewed-by: Tino Reichardt <milky-zfs@mcmilk.de>
Reviewed-by: Richard Yao <richard.yao@alumni.stonybrook.edu>
Reviewed-by: Brian Behlendorf <behlendorf1@llnl.gov>
Signed-off-by: Alexander Lobakin <alobakin@pm.me>
Closes #14035
2022-10-16 17:53:22 +03:00
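A minimal sketch of the fallback described above, assuming only what the
commit message states (the actual handling lives in sys/asm_linkage.h,
not in this file); the include of <linux/linkage.h> just below is what
makes the kernel's own SLS/Rethunk-aware RET visible when it exists:

#ifndef RET
#define	RET	ret	/* no kernel-provided macro: plain ret, as in pre-SLS times */
#endif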
#if defined(_KERNEL) && defined(__linux__)
#include <linux/linkage.h>
#endif
/*
* Setjmp and longjmp implement non-local gotos using state vectors
* of type label_t.
*/
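As a usage illustration only (a hedged C sketch: the label_t shown here
is a hypothetical stand-in with eight 64-bit slots matching the stores
below; the real type is defined elsewhere in the tree):

typedef struct { unsigned long val[8]; } label_t;	/* hypothetical layout */

extern int setjmp(label_t *lp);		/* returns 0 on the direct call */
extern void longjmp(label_t *lp);	/* "returns" by making setjmp return 1 */

static label_t env;

static void fail(void)
{
	longjmp(&env);			/* unwind back to the setjmp() site */
}

int
run(void)
{
	if (setjmp(&env) == 0) {	/* direct call: state saved, got 0 */
		fail();			/* never falls through */
	}
	return (1);			/* reached via longjmp() */
}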
#ifdef __x86_64__
#define _ASM
#include <sys/asm_linkage.h>
ENTRY_ALIGN(setjmp, 8)
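	/* Save %rsp, %rbp, and the callee-saved registers into the label_t. */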
movq %rsp, 0(%rdi)
movq %rbp, 8(%rdi)
movq %rbx, 16(%rdi)
movq %r12, 24(%rdi)
movq %r13, 32(%rdi)
movq %r14, 40(%rdi)
movq %r15, 48(%rdi)
movq 0(%rsp), %rdx /* return address */
movq %rdx, 56(%rdi) /* rip */
xorl %eax, %eax /* return 0 */
RET
SET_SIZE(setjmp)
ENTRY_ALIGN(longjmp, 8)
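	/* Restore the saved registers and resume at the saved return address. */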
movq 0(%rdi), %rsp
movq 8(%rdi), %rbp
movq 16(%rdi), %rbx
movq 24(%rdi), %r12
movq 32(%rdi), %r13
movq 40(%rdi), %r14
movq 48(%rdi), %r15
movq 56(%rdi), %rdx /* return address */
movq %rdx, 0(%rsp)
xorl %eax, %eax
incl %eax /* return 1 */
RET
SET_SIZE(longjmp)
#ifdef __ELF__
.section .note.GNU-stack,"",%progbits
#endif
#endif /* __x86_64__ */