From patchwork Mon Dec 1 17:04:40 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 778
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:40 -0700
Message-ID: <20251201170529.3237986-2-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 01/37] test: hooks: Add a symlink for tasman

Add a symlink to ellesmere so we can run tests on tasman.

Signed-off-by: Simon Glass
---
 test/hooks/bin/tasman | 1 +
 1 file changed, 1 insertion(+)
 create mode 120000 test/hooks/bin/tasman

diff --git a/test/hooks/bin/tasman b/test/hooks/bin/tasman
new file mode 120000
index 00000000000..784d574a1e1
--- /dev/null
+++ b/test/hooks/bin/tasman
@@ -0,0 +1 @@
+ellesmere
\ No newline at end of file

From patchwork Mon Dec 1 17:04:41 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 779
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:41 -0700
Message-ID: <20251201170529.3237986-3-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 02/37] treewide: Add missing string.h includes

Add string.h to files that use string functions such as strdup(),
strcmp() and strcpy(). These are currently available implicitly through
the malloc.h header, but that will soon change.

For bouncebuf, take this opportunity to sort the headers correctly.
Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 arch/arm/mach-zynq/slcr.c              | 1 +
 board/ti/common/cape_detect.c          | 1 +
 boot/expo_build_cb.c                   | 1 +
 cmd/printf.c                           | 1 +
 common/bouncebuf.c                     | 5 +++--
 common/iomux.c                         | 1 +
 common/menu.c                          | 1 +
 drivers/crypto/fsl/desc_constr.h       | 1 +
 drivers/crypto/fsl/error.c             | 1 +
 drivers/crypto/fsl/fsl_blob.c          | 1 +
 drivers/crypto/fsl/fsl_hash.c          | 1 +
 drivers/dma/apbh_dma.c                 | 1 +
 drivers/fpga/versalpl.c                | 1 +
 drivers/net/fsl-mc/dpio/qbman_portal.c | 1 +
 drivers/net/qe/uccf.c                  | 1 +
 drivers/spi/spi-mem-nodm.c             | 1 +
 drivers/video/imx/ipu_common.c         | 1 +
 lib/circbuf.c                          | 1 +
 lib/crypto/x509_helper.c               | 2 ++
 lib/dhry/dhry_1.c                      | 1 +
 lib/libavb/avb_sysdeps_posix.c         | 1 +
 lib/linux_compat.c                     | 1 +
 lib/list_sort.c                        | 1 +
 lib/mbedtls/mscode_parser.c            | 1 +
 lib/membuf.c                           | 1 +
 lib/strto.c                            | 1 +
 26 files changed, 29 insertions(+), 2 deletions(-)

diff --git a/arch/arm/mach-zynq/slcr.c b/arch/arm/mach-zynq/slcr.c
index ef877df0fe8..b69d5aad961 100644
--- a/arch/arm/mach-zynq/slcr.c
+++ b/arch/arm/mach-zynq/slcr.c
@@ -5,6 +5,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
diff --git a/board/ti/common/cape_detect.c b/board/ti/common/cape_detect.c
index da805befabc..4984f7a3a22 100644
--- a/board/ti/common/cape_detect.c
+++ b/board/ti/common/cape_detect.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include "cape_detect.h"
diff --git a/boot/expo_build_cb.c b/boot/expo_build_cb.c
index 442ad760e79..6dd3dbd92b8 100644
--- a/boot/expo_build_cb.c
+++ b/boot/expo_build_cb.c
@@ -14,6 +14,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include
diff --git a/cmd/printf.c b/cmd/printf.c
index a1727ac15a2..52f21c8b842 100644
--- a/cmd/printf.c
+++ b/cmd/printf.c
@@ -89,6 +89,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include

 #define WANT_HEX_ESCAPES 0
diff --git a/common/bouncebuf.c b/common/bouncebuf.c
index b2f87e4d939..5a7d3efa521 100644
--- a/common/bouncebuf.c
+++ b/common/bouncebuf.c
@@ -5,11 +5,12 @@
  * Copyright (C) 2012 Marek Vasut
  */
+#include
 #include
+#include
 #include
 #include
-#include
-#include
+#include
 #include
 #include
diff --git a/common/iomux.c b/common/iomux.c
index 1224c15eb71..e488934b29f 100644
--- a/common/iomux.c
+++ b/common/iomux.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <string.h>

 #if CONFIG_IS_ENABLED(CONSOLE_MUX)
 void iomux_printdevs(const int console)
diff --git a/common/menu.c b/common/menu.c
index 5a2126aa01a..b66803337d3 100644
--- a/common/menu.c
+++ b/common/menu.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include
 #include
diff --git a/drivers/crypto/fsl/desc_constr.h b/drivers/crypto/fsl/desc_constr.h
index 209557c4ffa..ce938d49887 100644
--- a/drivers/crypto/fsl/desc_constr.h
+++ b/drivers/crypto/fsl/desc_constr.h
@@ -7,6 +7,7 @@
  * Based on desc_constr.h file in linux drivers/crypto/caam
  */
+#include <string.h>
 #include
 #include "desc.h"
diff --git a/drivers/crypto/fsl/error.c b/drivers/crypto/fsl/error.c
index dfcf5dbab35..9008dccb27c 100644
--- a/drivers/crypto/fsl/error.c
+++ b/drivers/crypto/fsl/error.c
@@ -9,6 +9,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include "desc.h"
 #include "jr.h"
diff --git a/drivers/crypto/fsl/fsl_blob.c b/drivers/crypto/fsl/fsl_blob.c
index 0ecd6befd25..32beb03e8ae 100644
--- a/drivers/crypto/fsl/fsl_blob.c
+++ b/drivers/crypto/fsl/fsl_blob.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include
 #include "jobdesc.h"
diff --git a/drivers/crypto/fsl/fsl_hash.c b/drivers/crypto/fsl/fsl_hash.c
index 79b32e2627c..ea90aece64b 100644
--- a/drivers/crypto/fsl/fsl_hash.c
+++ b/drivers/crypto/fsl/fsl_hash.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include "jobdesc.h"
 #include "desc.h"
 #include "jr.h"
diff --git a/drivers/dma/apbh_dma.c b/drivers/dma/apbh_dma.c
index 331815c469f..89ff00540ae 100644
--- a/drivers/dma/apbh_dma.c
+++ b/drivers/dma/apbh_dma.c
@@ -16,6 +16,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
 #include
diff --git a/drivers/fpga/versalpl.c b/drivers/fpga/versalpl.c
index 1957e8dcaca..2fba888b8cc 100644
--- a/drivers/fpga/versalpl.c
+++ b/drivers/fpga/versalpl.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include
 #include
diff --git a/drivers/net/fsl-mc/dpio/qbman_portal.c b/drivers/net/fsl-mc/dpio/qbman_portal.c
index f4e82b0507c..d338fac4def 100644
--- a/drivers/net/fsl-mc/dpio/qbman_portal.c
+++ b/drivers/net/fsl-mc/dpio/qbman_portal.c
@@ -5,6 +5,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
 #include "qbman_portal.h"
diff --git a/drivers/net/qe/uccf.c b/drivers/net/qe/uccf.c
index badf4e5db3e..ab411361722 100644
--- a/drivers/net/qe/uccf.c
+++ b/drivers/net/qe/uccf.c
@@ -8,6 +8,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
 #include
diff --git a/drivers/spi/spi-mem-nodm.c b/drivers/spi/spi-mem-nodm.c
index 6d9ab61769a..6a79fda625b 100644
--- a/drivers/spi/spi-mem-nodm.c
+++ b/drivers/spi/spi-mem-nodm.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <string.h>

 int spi_mem_exec_op(struct spi_slave *slave, const struct spi_mem_op *op)
diff --git a/drivers/video/imx/ipu_common.c b/drivers/video/imx/ipu_common.c
index bd1ef0a800d..40d578d3980 100644
--- a/drivers/video/imx/ipu_common.c
+++ b/drivers/video/imx/ipu_common.c
@@ -13,6 +13,7 @@
 /* #define DEBUG */
 #include
 #include
+#include <string.h>
 #include
 #include
 #include
diff --git a/lib/circbuf.c b/lib/circbuf.c
index 461c240f788..043b5a60d36 100644
--- a/lib/circbuf.c
+++ b/lib/circbuf.c
@@ -6,6 +6,7 @@
 #include
 #include
+#include <string.h>
 #include
diff --git a/lib/crypto/x509_helper.c b/lib/crypto/x509_helper.c
index 87e8ff67ae1..bf79d42cd60 100644
--- a/lib/crypto/x509_helper.c
+++ b/lib/crypto/x509_helper.c
@@ -5,6 +5,8 @@
  * Copyright (C) 2012 Red Hat, Inc. All Rights Reserved.
  * Written by David Howells (dhowells@redhat.com)
  */
+
+#include <string.h>
 #include
 #include
 #include
diff --git a/lib/dhry/dhry_1.c b/lib/dhry/dhry_1.c
index 275a89942ea..4287b57e316 100644
--- a/lib/dhry/dhry_1.c
+++ b/lib/dhry/dhry_1.c
@@ -44,6 +44,7 @@ char SCCSid[] = "@(#) @(#)dhry_1.c:3.4 -- 5/15/91 19:30:21";
 #include
 #include
+#include <string.h>
 #include "dhry.h"
diff --git a/lib/libavb/avb_sysdeps_posix.c b/lib/libavb/avb_sysdeps_posix.c
index 6ffdb0b7eb3..1fde82be4d8 100644
--- a/lib/libavb/avb_sysdeps_posix.c
+++ b/lib/libavb/avb_sysdeps_posix.c
@@ -7,6 +7,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include "avb_sysdeps.h"
diff --git a/lib/linux_compat.c b/lib/linux_compat.c
index 985e88eb397..4df9db689ed 100644
--- a/lib/linux_compat.c
+++ b/lib/linux_compat.c
@@ -1,6 +1,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
diff --git a/lib/list_sort.c b/lib/list_sort.c
index a6e54d5bc46..cf5cac17720 100644
--- a/lib/list_sort.c
+++ b/lib/list_sort.c
@@ -8,6 +8,7 @@
 #include
 #include
 #include
+#include <string.h>
 #endif
 #include
 #include
diff --git a/lib/mbedtls/mscode_parser.c b/lib/mbedtls/mscode_parser.c
index c3805c6503c..956a5a47243 100644
--- a/lib/mbedtls/mscode_parser.c
+++ b/lib/mbedtls/mscode_parser.c
@@ -8,6 +8,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include
diff --git a/lib/membuf.c b/lib/membuf.c
index 207dff5625b..02b0cc8c6b0 100644
--- a/lib/membuf.c
+++ b/lib/membuf.c
@@ -9,6 +9,7 @@
 #include
 #include
 #include
+#include <string.h>
 #include
 #include "membuf.h"
diff --git a/lib/strto.c b/lib/strto.c
index 206d1e91847..b7b3655f17f 100644
--- a/lib/strto.c
+++ b/lib/strto.c
@@ -11,6 +11,7 @@
 #include
 #include
+#include <string.h>
 #include
 #include

From patchwork Mon Dec 1 17:04:42 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 780
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:42 -0700
Message-ID: <20251201170529.3237986-4-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 03/37] imx8mp: Increase the BSS limit for a few boards

These boards have quite a low BSS limit of 1K. The new dlmalloc needs
about 1K of BSS (instead of 2K of data), since its state has moved out
of the data region.

Increase the limit by 0.5K for these boards:

   imx8mp_data_modul_edm_sbc
   imx8mp_dhcom_drc02
   imx8mp_dhcom_pdk2
   imx8mp_dhcom_pdk3
   imx8mp_dhcom_picoitx
   imx8mp_venice

Signed-off-by: Simon Glass
---
 configs/imx8mp_data_modul_edm_sbc_defconfig | 2 +-
 configs/imx8mp_dhsom.config                 | 2 +-
 configs/imx8mp_venice_defconfig             | 2 +-
 configs/venice2_defconfig                   | 1 +
 4 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/configs/imx8mp_data_modul_edm_sbc_defconfig b/configs/imx8mp_data_modul_edm_sbc_defconfig
index 8228bf59756..24764c89702 100644
--- a/configs/imx8mp_data_modul_edm_sbc_defconfig
+++ b/configs/imx8mp_data_modul_edm_sbc_defconfig
@@ -26,7 +26,7 @@ CONFIG_SPL_STACK=0x96fc00
 CONFIG_SPL_TEXT_BASE=0x920000
 CONFIG_SPL_HAS_BSS_LINKER_SECTION=y
 CONFIG_SPL_BSS_START_ADDR=0x96fc00
-CONFIG_SPL_BSS_MAX_SIZE=0x400
+CONFIG_SPL_BSS_MAX_SIZE=0x600
 CONFIG_SYS_BOOTM_LEN=0x8000000
 CONFIG_SYS_LOAD_ADDR=0x50000000
 CONFIG_SPL=y
diff --git a/configs/imx8mp_dhsom.config b/configs/imx8mp_dhsom.config
index 3980c410266..cdabdbd3859 100644
--- a/configs/imx8mp_dhsom.config
+++ b/configs/imx8mp_dhsom.config
@@ -28,7 +28,7 @@ CONFIG_USE_PREBOOT=y
 CONFIG_FIT_EXTERNAL_OFFSET=0x3000
 CONFIG_SPL_BOARD_INIT=y
 CONFIG_SPL_BOOTROM_SUPPORT=y
-CONFIG_SPL_BSS_MAX_SIZE=0x400
+CONFIG_SPL_BSS_MAX_SIZE=0x600
 CONFIG_SPL_BSS_START_ADDR=0x96fc00
 CONFIG_SPL_CUSTOM_SYS_MALLOC_ADDR=0x4c000000
 CONFIG_SPL_DM=y
diff --git a/configs/imx8mp_venice_defconfig b/configs/imx8mp_venice_defconfig
index 39b82063537..5365493ef30 100644
--- a/configs/imx8mp_venice_defconfig
+++ b/configs/imx8mp_venice_defconfig
@@ -20,7 +20,7 @@ CONFIG_SPL_STACK=0x960000
 CONFIG_SPL_TEXT_BASE=0x920000
 CONFIG_SPL_HAS_BSS_LINKER_SECTION=y
 CONFIG_SPL_BSS_START_ADDR=0x98fc00
-CONFIG_SPL_BSS_MAX_SIZE=0x400
+CONFIG_SPL_BSS_MAX_SIZE=0x600
 CONFIG_SYS_BOOTM_LEN=0x10000000
 CONFIG_SYS_LOAD_ADDR=0x40480000
 CONFIG_SPL=y
diff --git a/configs/venice2_defconfig b/configs/venice2_defconfig
index 3d80197ef38..a832f324dce 100644
--- a/configs/venice2_defconfig
+++ b/configs/venice2_defconfig
@@ -59,3 +59,4 @@ CONFIG_USB_ETHER_ASIX=y
 CONFIG_USB_GADGET=y
 CONFIG_CI_UDC=y
 CONFIG_USB_GADGET_DOWNLOAD=y
+CONFIG_SPL_MAX_SIZE=0x28000

From patchwork Mon Dec 1 17:04:43 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 781
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:43 -0700
Message-ID: <20251201170529.3237986-6-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 04/37] test: Use TOTAL_MALLOC_LEN for abuf and alist tests

Several tests use CONFIG_SYS_MALLOC_LEN to test allocations that should
fail due to exceeding the pool size. However, the actual malloc-pool
size is TOTAL_MALLOC_LEN, which includes CONFIG_ENV_SIZE for boards
that need to store the environment in RAM. The extra space
accommodates:

- the hash table allocated via calloc()
- strdup() calls for each environment-variable key
- strdup() calls for each environment-variable value

This is an estimate and typically consumes less than CONFIG_ENV_SIZE,
leaving more free space in the malloc pool than was reserved.
On qemu-x86_64, CONFIG_ENV_SIZE is 0x40000, making the actual pool
0x240000 bytes. Tests expecting malloc(CONFIG_SYS_MALLOC_LEN) to fail
might unexpectedly succeed, since there is more space available.

Update all tests to use TOTAL_MALLOC_LEN to correctly reflect the
actual malloc-pool size.

Co-developed-by: Claude
---
 test/lib/abuf.c  | 5 +++--
 test/lib/alist.c | 3 ++-
 2 files changed, 5 insertions(+), 3 deletions(-)

diff --git a/test/lib/abuf.c b/test/lib/abuf.c
index 9cbb627d0b6..e97bb8b66bc 100644
--- a/test/lib/abuf.c
+++ b/test/lib/abuf.c
@@ -5,6 +5,7 @@
  */

 #include
+#include
 #include
 #include
 #include
@@ -244,7 +245,7 @@ static int lib_test_abuf_large(struct unit_test_state *uts)

 	/* Try an impossible size */
 	abuf_init(&buf);
-	ut_asserteq(false, abuf_realloc(&buf, CONFIG_SYS_MALLOC_LEN));
+	ut_asserteq(false, abuf_realloc(&buf, TOTAL_MALLOC_LEN));
 	ut_assertnull(buf.data);
 	ut_asserteq(0, buf.size);
 	ut_asserteq(false, buf.alloced);
@@ -264,7 +265,7 @@ static int lib_test_abuf_large(struct unit_test_state *uts)
 	ut_assert(delta > 0);

 	/* try to increase it */
-	ut_asserteq(false, abuf_realloc(&buf, CONFIG_SYS_MALLOC_LEN));
+	ut_asserteq(false, abuf_realloc(&buf, TOTAL_MALLOC_LEN));
 	ut_asserteq_ptr(ptr, buf.data);
 	ut_asserteq(TEST_DATA_LEN, buf.size);
 	ut_asserteq(true, buf.alloced);
diff --git a/test/lib/alist.c b/test/lib/alist.c
index 0bf24578d2e..108eaed8d92 100644
--- a/test/lib/alist.c
+++ b/test/lib/alist.c
@@ -5,6 +5,7 @@
  */

 #include
+#include
 #include
 #include
 #include
@@ -41,7 +42,7 @@ static int lib_test_alist_init(struct unit_test_state *uts)

 	/* use an impossible size */
 	ut_asserteq(false, alist_init(&lst, obj_size,
-				      CONFIG_SYS_MALLOC_LEN));
+				      TOTAL_MALLOC_LEN));
 	ut_assertnull(lst.data);
 	ut_asserteq(0, lst.count);
 	ut_asserteq(0, lst.alloc);

From patchwork Mon Dec 1 17:04:44 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 782
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:44 -0700
Message-ID: <20251201170529.3237986-6-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 05/37] malloc: Rename dlmalloc.c to dlmalloc_old.c
List-Id: Discussion and patches related to U-Boot Concept
From: Simon Glass

Rename this file so that we can start to bring in the new one.

Signed-off-by: Simon Glass
---
 common/Makefile                       | 2 +-
 common/{dlmalloc.c => dlmalloc_old.c} | 0
 2 files changed, 1 insertion(+), 1 deletion(-)
 rename common/{dlmalloc.c => dlmalloc_old.c} (100%)

diff --git a/common/Makefile b/common/Makefile
index 7270af457f5..ffa46ce5e06 100644
--- a/common/Makefile
+++ b/common/Makefile
@@ -71,7 +71,7 @@
 obj-$(CONFIG_BOUNCE_BUFFER) += bouncebuf.o
 obj-$(CONFIG_$(PHASE_)SERIAL) += console.o
 obj-$(CONFIG_CROS_EC) += cros_ec.o
-obj-y += dlmalloc.o
+obj-y += dlmalloc_old.o
 obj-$(CONFIG_$(PHASE_)SYS_MALLOC_F) += malloc_simple.o
 obj-$(CONFIG_$(PHASE_)CYCLIC) += cyclic.o

diff --git a/common/dlmalloc.c b/common/dlmalloc_old.c
similarity index 100%
rename from common/dlmalloc.c
rename to common/dlmalloc_old.c

From patchwork Mon Dec 1 17:04:45 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 977
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:45 -0700
Message-ID: <20251201170529.3237986-7-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 06/37] malloc: Rename malloc.h to malloc_old.h
List-Id: Discussion and patches related to U-Boot Concept
From: Simon Glass

Rename the existing malloc.h header to malloc_old.h and add a stub
malloc.h that includes it. This prepares for importing the new
dlmalloc 2.8.6 header while still using the old implementation.

No functional change is intended.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 include/malloc.h     | 998 +-----------------------------------------
 include/malloc_old.h | 999 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 1003 insertions(+), 994 deletions(-)
 create mode 100644 include/malloc_old.h

diff --git a/include/malloc.h b/include/malloc.h
index 728451086aa..cb37752b86a 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -1,999 +1,9 @@
 /* SPDX-License-Identifier: GPL-2.0+ */
 /*
-  This code is based on a version of malloc/free/realloc written by Doug Lea and
-  released to the public domain. Send questions/comments/complaints/performance
-  data to dl@cs.oswego.edu
-
-* VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee)
-
-  Note: There may be an updated version of this malloc obtainable at
-	http://g.oswego.edu/pub/misc/malloc.c
-	Check before installing!
-
-* Why use this malloc?
-
-  This is not the fastest, most space-conserving, most portable, or
-  most tunable malloc ever written. However it is among the fastest
-  while also being among the most space-conserving, portable and tunable.
-  Consistent balance across these factors results in a good general-purpose
-  allocator. For a high-level description, see
-	http://g.oswego.edu/dl/html/malloc.html
-
-* Synopsis of public routines
-
-  (Much fuller descriptions are contained in the program documentation below.)
-
-  malloc(size_t n);
-     Return a pointer to a newly allocated chunk of at least n bytes, or null
-     if no space is available.
-  free(Void_t* p);
-     Release the chunk of memory pointed to by p, or no effect if p is null.
-  realloc(Void_t* p, size_t n);
-     Return a pointer to a chunk of size n that contains the same data
-     as does chunk p up to the minimum of (n, p's size) bytes, or null
-     if no space is available. The returned pointer may or may not be
-     the same as p. If p is null, equivalent to malloc. Unless the
-     #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a
-     size argument of zero (re)allocates a minimum-sized chunk.
-  memalign(size_t alignment, size_t n);
-     Return a pointer to a newly allocated chunk of n bytes, aligned
-     in accord with the alignment argument, which must be a power of
-     two.
-  valloc(size_t n);
-     Equivalent to memalign(pagesize, n), where pagesize is the page
-     size of the system (or as near to this as can be figured out from
-     all the includes/defines below.)
-  pvalloc(size_t n);
-     Equivalent to valloc(minimum-page-that-holds(n)), that is,
-     round up n to nearest pagesize.
-  calloc(size_t unit, size_t quantity);
-     Returns a pointer to quantity * unit bytes, with all locations
-     set to zero.
-  cfree(Void_t* p);
-     Equivalent to free(p).
-  malloc_trim(size_t pad);
-     Release all but pad bytes of freed top-most memory back
-     to the system. Return 1 if successful, else 0.
-  malloc_usable_size(Void_t* p);
-     Report the number usable allocated bytes associated with allocated
-     chunk p. This may or may not report more bytes than were requested,
-     due to alignment and minimum size constraints.
-  malloc_stats();
-     Prints brief summary statistics on stderr.
-  mallinfo()
-     Returns (by copy) a struct containing various summary statistics.
-  mallopt(int parameter_number, int parameter_value)
-     Changes one of the tunable parameters described below. Returns
-     1 if successful in changing the parameter, else 0.
-
-* Vital statistics:
-
-  Alignment:                            8-byte
-       8 byte alignment is currently hardwired into the design. This
-       seems to suffice for all current machines and C compilers.
-
-  Assumed pointer representation:       4 or 8 bytes
-       Code for 8-byte pointers is untested by me but has worked
-       reliably by Wolfram Gloger, who contributed most of the
-       changes supporting this.
-
-  Assumed size_t representation:        4 or 8 bytes
-       Note that size_t is allowed to be 4 bytes even if pointers are 8.
-
-  Minimum overhead per allocated chunk: 4 or 8 bytes
-       Each malloced chunk has a hidden overhead of 4 bytes holding size
-       and status information.
-
-  Minimum allocated size: 4-byte ptrs:  16 bytes    (including 4 overhead)
-                          8-byte ptrs:  24/32 bytes (including, 4/8 overhead)
-
-       When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte
-       ptrs but 4 byte size) or 24 (for 8/8) additional bytes are
-       needed; 4 (8) for a trailing size field
-       and 8 (16) bytes for free list pointers. Thus, the minimum
-       allocatable size is 16/24/32 bytes.
-
-       Even a request for zero bytes (i.e., malloc(0)) returns a
-       pointer to something of the minimum allocatable size.
-
-  Maximum allocated size: 4-byte size_t: 2^31 -  8 bytes
-                          8-byte size_t: 2^63 - 16 bytes
-
-       It is assumed that (possibly signed) size_t bit values suffice to
-       represent chunk sizes. `Possibly signed' is due to the fact
-       that `size_t' may be defined on a system as either a signed or
-       an unsigned type. To be conservative, values that would appear
-       as negative numbers are avoided.
-       Requests for sizes with a negative sign bit when the request
-       size is treaded as a long will return null.
-
-  Maximum overhead wastage per allocated chunk: normally 15 bytes
-
-       Alignnment demands, plus the minimum allocatable size restriction
-       make the normal worst-case wastage 15 bytes (i.e., up to 15
-       more bytes will be allocated than were requested in malloc), with
-       two exceptions:
-         1. Because requests for zero bytes allocate non-zero space,
-            the worst case wastage for a request of zero bytes is 24 bytes.
-         2. For requests >= mmap_threshold that are serviced via
-            mmap(), the worst case wastage is 8 bytes plus the remainder
-            from a system page (the minimal mmap unit); typically 4096 bytes.
-
-* Limitations
-
-    Here are some features that are NOT currently supported
-
-    * No user-definable hooks for callbacks and the like.
-    * No automated mechanism for fully checking that all accesses
-      to malloced memory stay within their bounds.
-    * No support for compaction.
-
-* Synopsis of compile-time options:
-
-    People have reported using previous versions of this malloc on all
-    versions of Unix, sometimes by tweaking some of the defines
-    below. It has been tested most extensively on Solaris and
-    Linux. It is also reported to work on WIN32 platforms.
-    People have also reported adapting this malloc for use in
-    stand-alone embedded systems.
-
-    The implementation is in straight, hand-tuned ANSI C. Among other
-    consequences, it uses a lot of macros. Because of this, to be at
-    all usable, this code should be compiled using an optimizing compiler
-    (for example gcc -O2) that can simplify expressions and control
-    paths.
-
-  __STD_C                  (default: derived from C compiler defines)
-     Nonzero if using ANSI-standard C compiler, a C++ compiler, or
-     a C compiler sufficiently close to ANSI to get away with it.
-  DEBUG                    (default: NOT defined)
-     Define to enable debugging. Adds fairly extensive assertion-based
-     checking to help track down memory errors, but noticeably slows down
-     execution.
-  REALLOC_ZERO_BYTES_FREES (default: NOT defined)
-     Define this if you think that realloc(p, 0) should be equivalent
-     to free(p). Otherwise, since malloc returns a unique pointer for
-     malloc(0), so does realloc(p, 0).
-  HAVE_MEMCPY              (default: defined)
-     Define if you are not otherwise using ANSI STD C, but still
-     have memcpy and memset in your C library and want to use them.
-     Otherwise, simple internal versions are supplied.
-  USE_MEMCPY               (default: 1 if HAVE_MEMCPY is defined, 0 otherwise)
-     Define as 1 if you want the C library versions of memset and
-     memcpy called in realloc and calloc (otherwise macro versions are used).
-     At least on some platforms, the simple macro versions usually
-     outperform libc versions.
-  HAVE_MMAP                (default: defined as 1)
-     Define to non-zero to optionally make malloc() use mmap() to
-     allocate very large blocks.
-  HAVE_MREMAP              (default: defined as 0 unless Linux libc set)
-     Define to non-zero to optionally make realloc() use mremap() to
-     reallocate very large blocks.
-  malloc_getpagesize       (default: derived from system #includes)
-     Either a constant or routine call returning the system page size.
-  HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined)
-     Optionally define if you are on a system with a /usr/include/malloc.h
-     that declares struct mallinfo. It is not at all necessary to
-     define this even if you do, but will ensure consistency.
-  INTERNAL_SIZE_T          (default: size_t)
-     Define to a 32-bit type (probably `unsigned int') if you are on a
-     64-bit machine, yet do not want or need to allow malloc requests of
-     greater than 2^31 to be handled. This saves space, especially for
-     very small chunks.
-  INTERNAL_LINUX_C_LIB     (default: NOT defined)
-     Defined only when compiled as part of Linux libc.
-     Also note that there is some odd internal name-mangling via defines
-     (for example, internally, `malloc' is named `mALLOc') needed
-     when compiling in this case. These look funny but don't otherwise
-     affect anything.
-  WIN32                    (default: undefined)
-     Define this on MS win (95, nt) platforms to compile in sbrk emulation.
-  LACKS_UNISTD_H           (default: undefined if not WIN32)
-     Define this if your system does not have a .
-  LACKS_SYS_PARAM_H        (default: undefined if not WIN32)
-     Define this if your system does not have a .
-  MORECORE                 (default: sbrk)
-     The name of the routine to call to obtain more memory from the system.
-  MORECORE_FAILURE         (default: -1)
-     The value returned upon failure of MORECORE.
-  MORECORE_CLEARS          (default 1)
-     true (1) if the routine mapped to MORECORE zeroes out memory (which
-     holds for sbrk).
-  DEFAULT_TRIM_THRESHOLD
-  DEFAULT_TOP_PAD
-  DEFAULT_MMAP_THRESHOLD
-  DEFAULT_MMAP_MAX
-     Default values of tunable parameters (described in detail below)
-     controlling interaction with host system routines (sbrk, mmap, etc).
-     These values may also be changed dynamically via mallopt(). The
-     preset defaults are those that give best performance for typical
-     programs/systems.
-  USE_DL_PREFIX            (default: undefined)
-     Prefix all public routines with the string 'dl'. Useful to
-     quickly avoid procedure declaration conflicts and linker symbol
-     conflicts with existing memory allocation routines.
-
-*/
-
-
-#ifndef __MALLOC_H__
-#define __MALLOC_H__
-
-/* Preliminaries */
-
-#ifndef __STD_C
-#ifdef __STDC__
-#define __STD_C 1
-#else
-#if __cplusplus
-#define __STD_C 1
-#else
-#define __STD_C 0
-#endif /*__cplusplus*/
-#endif /*__STDC__*/
-#endif /*__STD_C*/
-
-#ifndef Void_t
-#if (__STD_C || defined(WIN32))
-#define Void_t void
-#else
-#define Void_t char
-#endif
-#endif /*Void_t*/
-
-#if __STD_C
-#include /* for size_t */
-#else
-#include
-#endif /* __STD_C */
-
-#ifdef __cplusplus
-extern "C" {
-#endif
-
-#if 0 /* not for U-Boot */
-#include /* needed for malloc_stats */
-#endif
-
-/*
-  Compile-time options
-*/
-
-/*
-    Debugging:
-
-    Because freed chunks may be overwritten with link fields, this
-    malloc will often die when freed memory is overwritten by user
-    programs. This can be very effective (albeit in an annoying way)
-    in helping track down dangling pointers.
-
-    If you compile with -DDEBUG, a number of assertion checks are
-    enabled that will catch more memory errors. You probably won't be
-    able to make much sense of the actual assertion errors, but they
-    should help you locate incorrectly overwritten memory. The
-    checking is fairly extensive, and will slow down execution
-    noticeably. Calling malloc_stats or mallinfo with DEBUG set will
-    attempt to check every non-mmapped allocated and free chunk in the
-    course of computing the summmaries. (By nature, mmapped regions
-    cannot be checked very much automatically.)
-
-    Setting DEBUG may also be helpful if you are trying to modify
-    this code. The assertions in the check routines spell out in more
-    detail the assumptions and invariants underlying the algorithms.
-
-*/
-
-/*
-  INTERNAL_SIZE_T is the word-size used for internal bookkeeping
-  of chunk sizes. On a 64-bit machine, you can reduce malloc
-  overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int'
-  at the expense of not being able to handle requests greater than
-  2^31. This limitation is hardly ever a concern; you are encouraged
-  to set this. However, the default version is the same as size_t.
-*/
-
-#ifndef INTERNAL_SIZE_T
-#define INTERNAL_SIZE_T size_t
-#endif
-
-/*
-  REALLOC_ZERO_BYTES_FREES should be set if a call to
-  realloc with zero bytes should be the same as a call to free.
-  Some people think it should. Otherwise, since this malloc
-  returns a unique pointer for malloc(0), so does realloc(p, 0).
-*/
-
-/* #define REALLOC_ZERO_BYTES_FREES */
-
-/*
-  WIN32 causes an emulation of sbrk to be compiled in
-  mmap-based options are not currently supported in WIN32.
-*/
-
-/* #define WIN32 */
-#ifdef WIN32
-#define MORECORE wsbrk
-#define HAVE_MMAP 0
-
-#define LACKS_UNISTD_H
-#define LACKS_SYS_PARAM_H
-
-/*
-  Include 'windows.h' to get the necessary declarations for the
-  Microsoft Visual C++ data structures and routines used in the 'sbrk'
-  emulation.
-
-  Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft
-  Visual C++ header files are included.
-*/
-#define WIN32_LEAN_AND_MEAN
-#include
-#endif
-
-/*
-  HAVE_MEMCPY should be defined if you are not otherwise using
-  ANSI STD C, but still have memcpy and memset in your C library
-  and want to use them in calloc and realloc. Otherwise simple
-  macro versions are defined here.
-
-  USE_MEMCPY should be defined as 1 if you actually want to
-  have memset and memcpy called. People report that the macro
-  versions are often enough faster than libc versions on many
-  systems that it is better to use them.
-
-*/
-
-#define HAVE_MEMCPY
-
-#ifndef USE_MEMCPY
-#ifdef HAVE_MEMCPY
-#define USE_MEMCPY 1
-#else
-#define USE_MEMCPY 0
-#endif
-#endif
-
-#if (__STD_C || defined(HAVE_MEMCPY))
-
-#if __STD_C
-/* U-Boot defines memset() and memcpy in /include/linux/string.h
-void* memset(void*, int, size_t);
-void* memcpy(void*, const void*, size_t);
-*/
-#include
-#else
-#ifdef WIN32
-/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */
-/* 'windows.h' */
-#else
-Void_t* memset();
-Void_t* memcpy();
-#endif
-#endif
-#endif
-
-#if USE_MEMCPY
-
-/* The following macros are only invoked with (2n+1)-multiples of
-   INTERNAL_SIZE_T units, with a positive integer n. This is exploited
-   for fast inline execution when n is small.
-*/
-
-#define MALLOC_ZERO(charp, nbytes)                                            \
-do {                                                                          \
-  INTERNAL_SIZE_T mzsz = (nbytes);                                            \
-  if(mzsz <= 9*sizeof(mzsz)) {                                                \
-    INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp);                         \
-    if(mzsz >= 5*sizeof(mzsz)) {     *mz++ = 0;                               \
-                                     *mz++ = 0;                               \
-      if(mzsz >= 7*sizeof(mzsz)) {   *mz++ = 0;                               \
-                                     *mz++ = 0;                               \
-        if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0;                               \
-                                     *mz++ = 0; }}}                           \
-                                     *mz++ = 0;                               \
-                                     *mz++ = 0;                               \
-                                     *mz   = 0;                               \
-  } else memset((charp), 0, mzsz);                                            \
-} while(0)
-
-#define MALLOC_COPY(dest,src,nbytes)                                          \
-do {                                                                          \
-  INTERNAL_SIZE_T mcsz = (nbytes);                                            \
-  if(mcsz <= 9*sizeof(mcsz)) {                                                \
-    INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src);                        \
-    INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest);                       \
-    if(mcsz >= 5*sizeof(mcsz)) {     *mcdst++ = *mcsrc++;                     \
-                                     *mcdst++ = *mcsrc++;                     \
-      if(mcsz >= 7*sizeof(mcsz)) {   *mcdst++ = *mcsrc++;                     \
-                                     *mcdst++ = *mcsrc++;                     \
-        if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++;                     \
-                                     *mcdst++ = *mcsrc++; }}}                 \
-                                     *mcdst++ = *mcsrc++;                     \
-                                     *mcdst++ = *mcsrc++;                     \
-                                     *mcdst   = *mcsrc  ;                     \
-  } else memcpy(dest, src, mcsz);                                             \
-} while(0)
-
-#else /* !USE_MEMCPY */
-
-/* Use Duff's device for good zeroing/copying performance.
-*/
-
-#define MALLOC_ZERO(charp, nbytes)                                            \
-do {                                                                          \
-  INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp);                           \
-  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
-  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
-  switch (mctmp) {                                                            \
-    case 0: for(;;) { *mzp++ = 0;                                             \
-    case 7:           *mzp++ = 0;                                             \
-    case 6:           *mzp++ = 0;                                             \
-    case 5:           *mzp++ = 0;                                             \
-    case 4:           *mzp++ = 0;                                             \
-    case 3:           *mzp++ = 0;                                             \
-    case 2:           *mzp++ = 0;                                             \
-    case 1:           *mzp++ = 0; if(mcn <= 0) break; mcn--; }                \
-  }                                                                           \
-} while(0)
-
-#define MALLOC_COPY(dest,src,nbytes)                                          \
-do {                                                                          \
-  INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src;                            \
-  INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest;                           \
-  long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn;                         \
-  if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; }             \
-  switch (mctmp) {                                                            \
-    case 0: for(;;) { *mcdst++ = *mcsrc++;                                    \
-    case 7:           *mcdst++ = *mcsrc++;                                    \
-    case 6:           *mcdst++ = *mcsrc++;                                    \
-    case 5:           *mcdst++ = *mcsrc++;                                    \
-    case 4:           *mcdst++ = *mcsrc++;                                    \
-    case 3:           *mcdst++ = *mcsrc++;                                    \
-    case 2:           *mcdst++ = *mcsrc++;                                    \
-    case 1:           *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; }       \
-  }                                                                           \
-} while(0)
-
-#endif
-
-/*
-  Define HAVE_MMAP to optionally make malloc() use mmap() to
-  allocate very large blocks. These will be returned to the
-  operating system immediately after a free().
-*/
-
-/***
-#ifndef HAVE_MMAP
-#define HAVE_MMAP 1
-#endif
-***/
-#undef HAVE_MMAP	/* Not available for U-Boot */
-
-/*
-  Define HAVE_MREMAP to make realloc() use mremap() to re-allocate
-  large blocks. This is currently only possible on Linux with
-  kernel versions newer than 1.3.77.
-*/
-
-/***
-#ifndef HAVE_MREMAP
-#ifdef INTERNAL_LINUX_C_LIB
-#define HAVE_MREMAP 1
-#else
-#define HAVE_MREMAP 0
-#endif
-#endif
-***/
-#undef HAVE_MREMAP	/* Not available for U-Boot */
-
-#ifdef HAVE_MMAP
-
-#include
-#include
-#include
-
-#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON)
-#define MAP_ANONYMOUS MAP_ANON
-#endif
-
-#endif /* HAVE_MMAP */
-
-/*
-  Access to system page size.
To the extent possible, this malloc - manages memory from the system in page-size units. - - The following mechanics for getpagesize were adapted from - bsd/gnu getpagesize.h -*/ - -#define LACKS_UNISTD_H /* Shortcut for U-Boot */ -#define malloc_getpagesize 4096 - -#ifndef LACKS_UNISTD_H -# include -#endif - -#ifndef malloc_getpagesize -# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ -# ifndef _SC_PAGE_SIZE -# define _SC_PAGE_SIZE _SC_PAGESIZE -# endif -# endif -# ifdef _SC_PAGE_SIZE -# define malloc_getpagesize sysconf(_SC_PAGE_SIZE) -# else -# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) - extern size_t getpagesize(); -# define malloc_getpagesize getpagesize() -# else -# ifdef WIN32 -# define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */ -# else -# ifndef LACKS_SYS_PARAM_H -# include -# endif -# ifdef EXEC_PAGESIZE -# define malloc_getpagesize EXEC_PAGESIZE -# else -# ifdef NBPG -# ifndef CLSIZE -# define malloc_getpagesize NBPG -# else -# define malloc_getpagesize (NBPG * CLSIZE) -# endif -# else -# ifdef NBPC -# define malloc_getpagesize NBPC -# else -# ifdef PAGESIZE -# define malloc_getpagesize PAGESIZE -# else -# define malloc_getpagesize (4096) /* just guess */ -# endif -# endif -# endif -# endif -# endif -# endif -# endif -#endif - -/* - - This version of malloc supports the standard SVID/XPG mallinfo - routine that returns a struct containing the same kind of - information you can get from malloc_stats. It should work on - any SVID/XPG compliant system that has a /usr/include/malloc.h - defining struct mallinfo. (If you'd like to install such a thing - yourself, cut out the preliminary declarations as described above - and below and save them in a malloc.h file. But there's no - compelling reason to bother to do this.) - - The main declaration needed is the mallinfo struct that is returned - (by-copy) by mallinfo(). 
The SVID/XPG malloinfo struct contains a - bunch of fields, most of which are not even meaningful in this - version of malloc. Some of these fields are are instead filled by - mallinfo() with other numbers that might possibly be of interest. - - HAVE_USR_INCLUDE_MALLOC_H should be set if you have a - /usr/include/malloc.h file that includes a declaration of struct - mallinfo. If so, it is included; else an SVID2/XPG2 compliant - version is declared below. These must be precisely the same for - mallinfo() to work. - -*/ - -/* #define HAVE_USR_INCLUDE_MALLOC_H */ - -#ifdef HAVE_USR_INCLUDE_MALLOC_H -#include "/usr/include/malloc.h" -#else - -/* SVID2/XPG mallinfo structure */ - -struct mallinfo { - int arena; /* total space allocated from system */ - int ordblks; /* number of non-inuse chunks */ - int smblks; /* unused -- always zero */ - int hblks; /* number of mmapped regions */ - int hblkhd; /* total space in mmapped regions */ - int usmblks; /* unused -- always zero */ - int fsmblks; /* unused -- always zero */ - int uordblks; /* total allocated space */ - int fordblks; /* total non-inuse space */ - int keepcost; /* top-most, releasable (via malloc_trim) space */ -}; - -/* SVID2/XPG mallopt options */ - -#define M_MXFAST 1 /* UNUSED in this malloc */ -#define M_NLBLKS 2 /* UNUSED in this malloc */ -#define M_GRAIN 3 /* UNUSED in this malloc */ -#define M_KEEP 4 /* UNUSED in this malloc */ - -#endif - -/* mallopt options that actually do something */ - -#define M_TRIM_THRESHOLD -1 -#define M_TOP_PAD -2 -#define M_MMAP_THRESHOLD -3 -#define M_MMAP_MAX -4 - -#ifndef DEFAULT_TRIM_THRESHOLD -#define DEFAULT_TRIM_THRESHOLD (128 * 1024) -#endif - -/* - M_TRIM_THRESHOLD is the maximum amount of unused top-most memory - to keep before releasing via malloc_trim in free(). - - Automatic trimming is mainly useful in long-lived programs. 
- Because trimming via sbrk can be slow on some systems, and can - sometimes be wasteful (in cases where programs immediately - afterward allocate more large chunks) the value should be high - enough so that your overall system performance would improve by - releasing. - - The trim threshold and the mmap control parameters (see below) - can be traded off with one another. Trimming and mmapping are - two different ways of releasing unused memory back to the - system. Between these two, it is often possible to keep - system-level demands of a long-lived program down to a bare - minimum. For example, in one test suite of sessions measuring - the XF86 X server on Linux, using a trim threshold of 128K and a - mmap threshold of 192K led to near-minimal long term resource - consumption. - - If you are using this malloc in a long-lived program, it should - pay to experiment with these values. As a rough guide, you - might set to a value close to the average size of a process - (program) running on your system. Releasing this much memory - would allow such a process to run in memory. Generally, it's - worth it to tune for trimming rather tham memory mapping when a - program undergoes phases where several large chunks are - allocated and released in ways that can reuse each other's - storage, perhaps mixed with phases where there are no such - chunks at all. And in well-behaved long-lived programs, - controlling release of large blocks via trimming versus mapping - is usually faster. - - However, in most programs, these parameters serve mainly as - protection against the system-level effects of carrying around - massive amounts of unneeded memory. Since frequent calls to - sbrk, mmap, and munmap otherwise degrade performance, the default - parameters are set to relatively high values that serve only as - safeguards. - - The default trim value is high enough to cause trimming only in - fairly extreme (by current memory consumption standards) cases. 
- It must be greater than page size to have any useful effect. To - disable trimming completely, you can set to (unsigned long)(-1); - -*/ - -#ifndef DEFAULT_TOP_PAD -#define DEFAULT_TOP_PAD (0) -#endif - -/* - M_TOP_PAD is the amount of extra `padding' space to allocate or - retain whenever sbrk is called. It is used in two ways internally: - - * When sbrk is called to extend the top of the arena to satisfy - a new malloc request, this much padding is added to the sbrk - request. - - * When malloc_trim is called automatically from free(), - it is used as the `pad' argument. - - In both cases, the actual amount of padding is rounded - so that the end of the arena is always a system page boundary. - - The main reason for using padding is to avoid calling sbrk so - often. Having even a small pad greatly reduces the likelihood - that nearly every malloc request during program start-up (or - after trimming) will invoke sbrk, which needlessly wastes - time. - - Automatic rounding-up to page-size units is normally sufficient - to avoid measurable overhead, so the default is 0. However, in - systems where sbrk is relatively slow, it can pay to increase - this value, at the expense of carrying around more memory than - the program needs. - -*/ - -#ifndef DEFAULT_MMAP_THRESHOLD -#define DEFAULT_MMAP_THRESHOLD (128 * 1024) -#endif - -/* - - M_MMAP_THRESHOLD is the request size threshold for using mmap() - to service a request. Requests of at least this size that cannot - be allocated using already-existing space will be serviced via mmap. - (If enough normal freed space already exists it is used instead.) - - Using mmap segregates relatively large chunks of memory so that - they can be individually obtained and released from the host - system. A request serviced through mmap is never reused by any - other request (at least not directly; the system may just so - happen to remap successive requests to the same locations). 
- - Segregating space in this way has the benefit that mmapped space - can ALWAYS be individually released back to the system, which - helps keep the system level memory demands of a long-lived - program low. Mapped memory can never become `locked' between - other chunks, as can happen with normally allocated chunks, which - means that even trimming via malloc_trim would not release them. - - However, it has the disadvantages that: - - 1. The space cannot be reclaimed, consolidated, and then - used to service later requests, as happens with normal chunks. - 2. It can lead to more wastage because of mmap page alignment - requirements. - 3. It causes malloc performance to be more dependent on host - system memory management support routines which may vary in - implementation quality and may impose arbitrary - limitations. Generally, servicing a request via normal - malloc steps is faster than going through a system's mmap. - - All together, these considerations should lead you to use mmap - only for relatively large requests. - -*/ - -#ifndef DEFAULT_MMAP_MAX -#ifdef HAVE_MMAP -#define DEFAULT_MMAP_MAX (64) -#else -#define DEFAULT_MMAP_MAX (0) -#endif -#endif - -/* - M_MMAP_MAX is the maximum number of requests to simultaneously - service using mmap. This parameter exists because: - - 1. Some systems have a limited number of internal tables for - use by mmap. - 2. In most systems, overreliance on mmap can degrade overall - performance. - 3. If a program allocates many large regions, it is probably - better off using normal sbrk-based allocation routines that - can reclaim and reallocate normal heap memory. Using a - small value allows transition into this mode after the - first few allocations. - - Setting to 0 disables all use of mmap. If HAVE_MMAP is not set, - the default value is 0, and attempts to set it to non-zero values - in mallopt will fail. -*/ - -/* - USE_DL_PREFIX will prefix all public routines with the string 'dl'. 
- Useful to quickly avoid procedure declaration conflicts and linker - symbol conflicts with existing memory allocation routines. - -*/ - -/* - * Rename the U-Boot alloc functions so that sandbox can still use the system - * ones - */ -#ifdef CONFIG_SANDBOX -#define USE_DL_PREFIX -#endif - -/* - - Special defines for linux libc - - Except when compiled using these special defines for Linux libc - using weak aliases, this malloc is NOT designed to work in - multithreaded applications. No semaphores or other concurrency - control are provided to ensure that multiple malloc or free calls - don't run at the same time, which could be disastrous. A single - semaphore could be used across malloc, realloc, and free (which is - essentially the effect of the linux weak alias approach). It would - be hard to obtain finer granularity. - -*/ - -#ifdef INTERNAL_LINUX_C_LIB - -#if __STD_C - -Void_t * __default_morecore_init (ptrdiff_t); -Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init; - -#else - -Void_t * __default_morecore_init (); -Void_t *(*__morecore)() = __default_morecore_init; - -#endif - -#define MORECORE (*__morecore) -#define MORECORE_FAILURE 0 -#define MORECORE_CLEARS 1 - -#else /* INTERNAL_LINUX_C_LIB */ - -#if __STD_C -extern Void_t* sbrk(ptrdiff_t); -#else -extern Void_t* sbrk(); -#endif - -#ifndef MORECORE -#define MORECORE sbrk -#endif - -#ifndef MORECORE_FAILURE -#define MORECORE_FAILURE -1 -#endif - -#ifndef MORECORE_CLEARS -#define MORECORE_CLEARS 1 -#endif - -#endif /* INTERNAL_LINUX_C_LIB */ - -#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__) - -#define cALLOc __libc_calloc -#define fREe __libc_free -#define mALLOc __libc_malloc -#define mEMALIGn __libc_memalign -#define rEALLOc __libc_realloc -#define vALLOc __libc_valloc -#define pvALLOc __libc_pvalloc -#define mALLINFo __libc_mallinfo -#define mALLOPt __libc_mallopt - -#pragma weak calloc = __libc_calloc -#pragma weak free = __libc_free -#pragma weak cfree = __libc_free -#pragma weak 
malloc = __libc_malloc -#pragma weak memalign = __libc_memalign -#pragma weak realloc = __libc_realloc -#pragma weak valloc = __libc_valloc -#pragma weak pvalloc = __libc_pvalloc -#pragma weak mallinfo = __libc_mallinfo -#pragma weak mallopt = __libc_mallopt - -#else - -void malloc_simple_info(void); - -/** - * malloc_enable_testing() - Put malloc() into test mode + * Stub header to include the old malloc header * - * This only works if UNIT_TESTING is enabled - * - * @max_allocs: return -ENOMEM after max_allocs calls to malloc() + * This allows the old malloc implementation to be preserved while + * preparing for a new dlmalloc version. */ -void malloc_enable_testing(int max_allocs); - -/** malloc_disable_testing() - Put malloc() into normal mode */ -void malloc_disable_testing(void); - -#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE) -#define malloc malloc_simple -#define realloc realloc_simple -#define calloc calloc_simple -#define memalign memalign_simple -#if IS_ENABLED(CONFIG_VALGRIND) -#define free free_simple -#else -static inline void free(void *ptr) {} -#endif -void *calloc(size_t nmemb, size_t size); -void *realloc_simple(void *ptr, size_t size); -#else - -# ifdef USE_DL_PREFIX -# define cALLOc dlcalloc -# define fREe dlfree -# define mALLOc dlmalloc -# define mEMALIGn dlmemalign -# define rEALLOc dlrealloc -# define vALLOc dlvalloc -# define pvALLOc dlpvalloc -# define mALLINFo dlmallinfo -# define mALLOPt dlmallopt - -/* Ensure that U-Boot actually uses these too */ -#define calloc dlcalloc -#define free(ptr) dlfree(ptr) -#define malloc(x) dlmalloc(x) -#define memalign dlmemalign -#define realloc dlrealloc -#define valloc dlvalloc -#define pvalloc dlpvalloc -#define mallinfo() dlmallinfo() -#define mallopt dlmallopt -#define malloc_trim dlmalloc_trim -#define malloc_usable_size dlmalloc_usable_size -#define malloc_stats dlmalloc_stats - -# else /* USE_DL_PREFIX */ -# define cALLOc calloc -# define fREe free -# define mALLOc malloc -# define mEMALIGn memalign 
-# define rEALLOc realloc -# define vALLOc valloc -# define pvALLOc pvalloc -# define mALLINFo mallinfo -# define mALLOPt mallopt -# endif /* USE_DL_PREFIX */ - -#endif - -/* Set up pre-relocation malloc() ready for use */ -int initf_malloc(void); - -/* Public routines */ - -/* Simple versions which can be used when space is tight */ -void *malloc_simple(size_t size); -void *memalign_simple(size_t alignment, size_t bytes); - -#pragma GCC visibility push(hidden) -# if __STD_C - -Void_t* mALLOc(size_t); -void fREe(Void_t*); -Void_t* rEALLOc(Void_t*, size_t); -Void_t* mEMALIGn(size_t, size_t); -Void_t* vALLOc(size_t); -Void_t* pvALLOc(size_t); -Void_t* cALLOc(size_t, size_t); -void cfree(Void_t*); -int malloc_trim(size_t); -size_t malloc_usable_size(Void_t*); -void malloc_stats(void); -int mALLOPt(int, int); -struct mallinfo mALLINFo(void); -# else -Void_t* mALLOc(); -void fREe(); -Void_t* rEALLOc(); -Void_t* mEMALIGn(); -Void_t* vALLOc(); -Void_t* pvALLOc(); -Void_t* cALLOc(); -void cfree(); -int malloc_trim(); -size_t malloc_usable_size(); -void malloc_stats(); -int mALLOPt(); -struct mallinfo mALLINFo(); -# endif -#endif -#pragma GCC visibility pop - -/* - * Begin and End of memory area for malloc(), and current "brk" - */ -extern ulong mem_malloc_start; -extern ulong mem_malloc_end; -extern ulong mem_malloc_brk; - -/** - * mem_malloc_init() - Set up the malloc() pool - * - * Sets the region of memory to be used for all future calls to malloc(), etc. 
- * - * @start: Start address - * @size: Size in bytes - */ -void mem_malloc_init(ulong start, ulong size); - -#ifdef __cplusplus -}; /* end of extern "C" */ -#endif -#endif /* __MALLOC_H__ */ +#include <malloc_old.h> diff --git a/include/malloc_old.h b/include/malloc_old.h new file mode 100644 index 00000000000..728451086aa --- /dev/null +++ b/include/malloc_old.h @@ -0,0 +1,999 @@ +/* SPDX-License-Identifier: GPL-2.0+ */ +/* + This code is based on a version of malloc/free/realloc written by Doug Lea and + released to the public domain. Send questions/comments/complaints/performance + data to dl@cs.oswego.edu + +* VERSION 2.6.6 Sun Mar 5 19:10:03 2000 Doug Lea (dl at gee) + + Note: There may be an updated version of this malloc obtainable at + http://g.oswego.edu/pub/misc/malloc.c + Check before installing! + +* Why use this malloc? + + This is not the fastest, most space-conserving, most portable, or + most tunable malloc ever written. However it is among the fastest + while also being among the most space-conserving, portable and tunable. + Consistent balance across these factors results in a good general-purpose + allocator. For a high-level description, see + http://g.oswego.edu/dl/html/malloc.html + +* Synopsis of public routines + + (Much fuller descriptions are contained in the program documentation below.) + + malloc(size_t n); + Return a pointer to a newly allocated chunk of at least n bytes, or null + if no space is available. + free(Void_t* p); + Release the chunk of memory pointed to by p, or no effect if p is null. + realloc(Void_t* p, size_t n); + Return a pointer to a chunk of size n that contains the same data + as does chunk p up to the minimum of (n, p's size) bytes, or null + if no space is available. The returned pointer may or may not be + the same as p. If p is null, equivalent to malloc. Unless the + #define REALLOC_ZERO_BYTES_FREES below is set, realloc with a + size argument of zero (re)allocates a minimum-sized chunk. 
+ memalign(size_t alignment, size_t n); + Return a pointer to a newly allocated chunk of n bytes, aligned + in accord with the alignment argument, which must be a power of + two. + valloc(size_t n); + Equivalent to memalign(pagesize, n), where pagesize is the page + size of the system (or as near to this as can be figured out from + all the includes/defines below.) + pvalloc(size_t n); + Equivalent to valloc(minimum-page-that-holds(n)), that is, + round up n to nearest pagesize. + calloc(size_t unit, size_t quantity); + Returns a pointer to quantity * unit bytes, with all locations + set to zero. + cfree(Void_t* p); + Equivalent to free(p). + malloc_trim(size_t pad); + Release all but pad bytes of freed top-most memory back + to the system. Return 1 if successful, else 0. + malloc_usable_size(Void_t* p); + Report the number of usable allocated bytes associated with allocated + chunk p. This may or may not report more bytes than were requested, + due to alignment and minimum size constraints. + malloc_stats(); + Prints brief summary statistics on stderr. + mallinfo() + Returns (by copy) a struct containing various summary statistics. + mallopt(int parameter_number, int parameter_value) + Changes one of the tunable parameters described below. Returns + 1 if successful in changing the parameter, else 0. + +* Vital statistics: + + Alignment: 8-byte + 8 byte alignment is currently hardwired into the design. This + seems to suffice for all current machines and C compilers. + + Assumed pointer representation: 4 or 8 bytes + Code for 8-byte pointers is untested by me but has been used + reliably by Wolfram Gloger, who contributed most of the + changes supporting this. + + Assumed size_t representation: 4 or 8 bytes + Note that size_t is allowed to be 4 bytes even if pointers are 8. + + Minimum overhead per allocated chunk: 4 or 8 bytes + Each malloced chunk has a hidden overhead of 4 bytes holding size + and status information. 
+ + Minimum allocated size: 4-byte ptrs: 16 bytes (including 4 overhead) + 8-byte ptrs: 24/32 bytes (including 4/8 overhead) + + When a chunk is freed, 12 (for 4byte ptrs) or 20 (for 8 byte + ptrs but 4 byte size) or 24 (for 8/8) additional bytes are + needed; 4 (8) for a trailing size field + and 8 (16) bytes for free list pointers. Thus, the minimum + allocatable size is 16/24/32 bytes. + + Even a request for zero bytes (i.e., malloc(0)) returns a + pointer to something of the minimum allocatable size. + + Maximum allocated size: 4-byte size_t: 2^31 - 8 bytes + 8-byte size_t: 2^63 - 16 bytes + + It is assumed that (possibly signed) size_t bit values suffice to + represent chunk sizes. `Possibly signed' is due to the fact + that `size_t' may be defined on a system as either a signed or + an unsigned type. To be conservative, values that would appear + as negative numbers are avoided. + Requests for sizes with a negative sign bit when the request + size is treated as a long will return null. + + Maximum overhead wastage per allocated chunk: normally 15 bytes + + Alignment demands, plus the minimum allocatable size restriction + make the normal worst-case wastage 15 bytes (i.e., up to 15 + more bytes will be allocated than were requested in malloc), with + two exceptions: + 1. Because requests for zero bytes allocate non-zero space, + the worst case wastage for a request of zero bytes is 24 bytes. + 2. For requests >= mmap_threshold that are serviced via + mmap(), the worst case wastage is 8 bytes plus the remainder + from a system page (the minimal mmap unit); typically 4096 bytes. + +* Limitations + + Here are some features that are NOT currently supported + + * No user-definable hooks for callbacks and the like. + * No automated mechanism for fully checking that all accesses + to malloced memory stay within their bounds. + * No support for compaction. 
+ +* Synopsis of compile-time options: + + People have reported using previous versions of this malloc on all + versions of Unix, sometimes by tweaking some of the defines + below. It has been tested most extensively on Solaris and + Linux. It is also reported to work on WIN32 platforms. + People have also reported adapting this malloc for use in + stand-alone embedded systems. + + The implementation is in straight, hand-tuned ANSI C. Among other + consequences, it uses a lot of macros. Because of this, to be at + all usable, this code should be compiled using an optimizing compiler + (for example gcc -O2) that can simplify expressions and control + paths. + + __STD_C (default: derived from C compiler defines) + Nonzero if using ANSI-standard C compiler, a C++ compiler, or + a C compiler sufficiently close to ANSI to get away with it. + DEBUG (default: NOT defined) + Define to enable debugging. Adds fairly extensive assertion-based + checking to help track down memory errors, but noticeably slows down + execution. + REALLOC_ZERO_BYTES_FREES (default: NOT defined) + Define this if you think that realloc(p, 0) should be equivalent + to free(p). Otherwise, since malloc returns a unique pointer for + malloc(0), so does realloc(p, 0). + HAVE_MEMCPY (default: defined) + Define if you are not otherwise using ANSI STD C, but still + have memcpy and memset in your C library and want to use them. + Otherwise, simple internal versions are supplied. + USE_MEMCPY (default: 1 if HAVE_MEMCPY is defined, 0 otherwise) + Define as 1 if you want the C library versions of memset and + memcpy called in realloc and calloc (otherwise macro versions are used). + At least on some platforms, the simple macro versions usually + outperform libc versions. + HAVE_MMAP (default: defined as 1) + Define to non-zero to optionally make malloc() use mmap() to + allocate very large blocks. 
+ HAVE_MREMAP (default: defined as 0 unless Linux libc set) + Define to non-zero to optionally make realloc() use mremap() to + reallocate very large blocks. + malloc_getpagesize (default: derived from system #includes) + Either a constant or routine call returning the system page size. + HAVE_USR_INCLUDE_MALLOC_H (default: NOT defined) + Optionally define if you are on a system with a /usr/include/malloc.h + that declares struct mallinfo. It is not at all necessary to + define this even if you do, but will ensure consistency. + INTERNAL_SIZE_T (default: size_t) + Define to a 32-bit type (probably `unsigned int') if you are on a + 64-bit machine, yet do not want or need to allow malloc requests of + greater than 2^31 to be handled. This saves space, especially for + very small chunks. + INTERNAL_LINUX_C_LIB (default: NOT defined) + Defined only when compiled as part of Linux libc. + Also note that there is some odd internal name-mangling via defines + (for example, internally, `malloc' is named `mALLOc') needed + when compiling in this case. These look funny but don't otherwise + affect anything. + WIN32 (default: undefined) + Define this on MS win (95, nt) platforms to compile in sbrk emulation. + LACKS_UNISTD_H (default: undefined if not WIN32) + Define this if your system does not have a <unistd.h>. + LACKS_SYS_PARAM_H (default: undefined if not WIN32) + Define this if your system does not have a <sys/param.h>. + MORECORE (default: sbrk) + The name of the routine to call to obtain more memory from the system. + MORECORE_FAILURE (default: -1) + The value returned upon failure of MORECORE. + MORECORE_CLEARS (default 1) + true (1) if the routine mapped to MORECORE zeroes out memory (which + holds for sbrk). + DEFAULT_TRIM_THRESHOLD + DEFAULT_TOP_PAD + DEFAULT_MMAP_THRESHOLD + DEFAULT_MMAP_MAX + Default values of tunable parameters (described in detail below) + controlling interaction with host system routines (sbrk, mmap, etc). 
+ These values may also be changed dynamically via mallopt(). The + preset defaults are those that give best performance for typical + programs/systems. + USE_DL_PREFIX (default: undefined) + Prefix all public routines with the string 'dl'. Useful to + quickly avoid procedure declaration conflicts and linker symbol + conflicts with existing memory allocation routines. + +*/ + + +#ifndef __MALLOC_H__ +#define __MALLOC_H__ + +/* Preliminaries */ + +#ifndef __STD_C +#ifdef __STDC__ +#define __STD_C 1 +#else +#if __cplusplus +#define __STD_C 1 +#else +#define __STD_C 0 +#endif /*__cplusplus*/ +#endif /*__STDC__*/ +#endif /*__STD_C*/ + +#ifndef Void_t +#if (__STD_C || defined(WIN32)) +#define Void_t void +#else +#define Void_t char +#endif +#endif /*Void_t*/ + +#if __STD_C +#include <stddef.h> /* for size_t */ +#else +#include <sys/types.h> +#endif /* __STD_C */ + +#ifdef __cplusplus +extern "C" { +#endif + +#if 0 /* not for U-Boot */ +#include <stdio.h> /* needed for malloc_stats */ +#endif + +/* + Compile-time options +*/ + +/* + Debugging: + + Because freed chunks may be overwritten with link fields, this + malloc will often die when freed memory is overwritten by user + programs. This can be very effective (albeit in an annoying way) + in helping track down dangling pointers. + + If you compile with -DDEBUG, a number of assertion checks are + enabled that will catch more memory errors. You probably won't be + able to make much sense of the actual assertion errors, but they + should help you locate incorrectly overwritten memory. The + checking is fairly extensive, and will slow down execution + noticeably. Calling malloc_stats or mallinfo with DEBUG set will + attempt to check every non-mmapped allocated and free chunk in the + course of computing the summaries. (By nature, mmapped regions + cannot be checked very much automatically.) + + Setting DEBUG may also be helpful if you are trying to modify + this code. 
The assertions in the check routines spell out in more + detail the assumptions and invariants underlying the algorithms. + +*/ + +/* + INTERNAL_SIZE_T is the word-size used for internal bookkeeping + of chunk sizes. On a 64-bit machine, you can reduce malloc + overhead by defining INTERNAL_SIZE_T to be a 32 bit `unsigned int' + at the expense of not being able to handle requests greater than + 2^31. This limitation is hardly ever a concern; you are encouraged + to set this. However, the default version is the same as size_t. +*/ + +#ifndef INTERNAL_SIZE_T +#define INTERNAL_SIZE_T size_t +#endif + +/* + REALLOC_ZERO_BYTES_FREES should be set if a call to + realloc with zero bytes should be the same as a call to free. + Some people think it should. Otherwise, since this malloc + returns a unique pointer for malloc(0), so does realloc(p, 0). +*/ + +/* #define REALLOC_ZERO_BYTES_FREES */ + +/* + WIN32 causes an emulation of sbrk to be compiled in + mmap-based options are not currently supported in WIN32. +*/ + +/* #define WIN32 */ +#ifdef WIN32 +#define MORECORE wsbrk +#define HAVE_MMAP 0 + +#define LACKS_UNISTD_H +#define LACKS_SYS_PARAM_H + +/* + Include 'windows.h' to get the necessary declarations for the + Microsoft Visual C++ data structures and routines used in the 'sbrk' + emulation. + + Define WIN32_LEAN_AND_MEAN so that only the essential Microsoft + Visual C++ header files are included. +*/ +#define WIN32_LEAN_AND_MEAN +#include <windows.h> +#endif + +/* + HAVE_MEMCPY should be defined if you are not otherwise using + ANSI STD C, but still have memcpy and memset in your C library + and want to use them in calloc and realloc. Otherwise simple + macro versions are defined here. + + USE_MEMCPY should be defined as 1 if you actually want to + have memset and memcpy called. People report that the macro + versions are often enough faster than libc versions on many + systems that it is better to use them. 
+ +*/ + +#define HAVE_MEMCPY + +#ifndef USE_MEMCPY +#ifdef HAVE_MEMCPY +#define USE_MEMCPY 1 +#else +#define USE_MEMCPY 0 +#endif +#endif + +#if (__STD_C || defined(HAVE_MEMCPY)) + +#if __STD_C +/* U-Boot defines memset() and memcpy() in /include/linux/string.h +void* memset(void*, int, size_t); +void* memcpy(void*, const void*, size_t); +*/ +#include <linux/string.h> +#else +#ifdef WIN32 +/* On Win32 platforms, 'memset()' and 'memcpy()' are already declared in */ +/* 'windows.h' */ +#else +Void_t* memset(); +Void_t* memcpy(); +#endif +#endif +#endif + +#if USE_MEMCPY + +/* The following macros are only invoked with (2n+1)-multiples of + INTERNAL_SIZE_T units, with a positive integer n. This is exploited + for fast inline execution when n is small. */ + +#define MALLOC_ZERO(charp, nbytes) \ +do { \ + INTERNAL_SIZE_T mzsz = (nbytes); \ + if(mzsz <= 9*sizeof(mzsz)) { \ + INTERNAL_SIZE_T* mz = (INTERNAL_SIZE_T*) (charp); \ + if(mzsz >= 5*sizeof(mzsz)) { *mz++ = 0; \ + *mz++ = 0; \ + if(mzsz >= 7*sizeof(mzsz)) { *mz++ = 0; \ + *mz++ = 0; \ + if(mzsz >= 9*sizeof(mzsz)) { *mz++ = 0; \ + *mz++ = 0; }}} \ + *mz++ = 0; \ + *mz++ = 0; \ + *mz = 0; \ + } else memset((charp), 0, mzsz); \ +} while(0) + +#define MALLOC_COPY(dest,src,nbytes) \ +do { \ + INTERNAL_SIZE_T mcsz = (nbytes); \ + if(mcsz <= 9*sizeof(mcsz)) { \ + INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) (src); \ + INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) (dest); \ + if(mcsz >= 5*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \ + *mcdst++ = *mcsrc++; \ + if(mcsz >= 7*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \ + *mcdst++ = *mcsrc++; \ + if(mcsz >= 9*sizeof(mcsz)) { *mcdst++ = *mcsrc++; \ + *mcdst++ = *mcsrc++; }}} \ + *mcdst++ = *mcsrc++; \ + *mcdst++ = *mcsrc++; \ + *mcdst = *mcsrc ; \ + } else memcpy(dest, src, mcsz); \ +} while(0) + +#else /* !USE_MEMCPY */ + +/* Use Duff's device for good zeroing/copying performance. 
*/ + +#define MALLOC_ZERO(charp, nbytes) \ +do { \ + INTERNAL_SIZE_T* mzp = (INTERNAL_SIZE_T*)(charp); \ + long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \ + if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ + switch (mctmp) { \ + case 0: for(;;) { *mzp++ = 0; \ + case 7: *mzp++ = 0; \ + case 6: *mzp++ = 0; \ + case 5: *mzp++ = 0; \ + case 4: *mzp++ = 0; \ + case 3: *mzp++ = 0; \ + case 2: *mzp++ = 0; \ + case 1: *mzp++ = 0; if(mcn <= 0) break; mcn--; } \ + } \ +} while(0) + +#define MALLOC_COPY(dest,src,nbytes) \ +do { \ + INTERNAL_SIZE_T* mcsrc = (INTERNAL_SIZE_T*) src; \ + INTERNAL_SIZE_T* mcdst = (INTERNAL_SIZE_T*) dest; \ + long mctmp = (nbytes)/sizeof(INTERNAL_SIZE_T), mcn; \ + if (mctmp < 8) mcn = 0; else { mcn = (mctmp-1)/8; mctmp %= 8; } \ + switch (mctmp) { \ + case 0: for(;;) { *mcdst++ = *mcsrc++; \ + case 7: *mcdst++ = *mcsrc++; \ + case 6: *mcdst++ = *mcsrc++; \ + case 5: *mcdst++ = *mcsrc++; \ + case 4: *mcdst++ = *mcsrc++; \ + case 3: *mcdst++ = *mcsrc++; \ + case 2: *mcdst++ = *mcsrc++; \ + case 1: *mcdst++ = *mcsrc++; if(mcn <= 0) break; mcn--; } \ + } \ +} while(0) + +#endif + +/* + Define HAVE_MMAP to optionally make malloc() use mmap() to + allocate very large blocks. These will be returned to the + operating system immediately after a free(). +*/ + +/*** +#ifndef HAVE_MMAP +#define HAVE_MMAP 1 +#endif +***/ +#undef HAVE_MMAP /* Not available for U-Boot */ + +/* + Define HAVE_MREMAP to make realloc() use mremap() to re-allocate + large blocks. This is currently only possible on Linux with + kernel versions newer than 1.3.77. +*/ + +/*** +#ifndef HAVE_MREMAP +#ifdef INTERNAL_LINUX_C_LIB +#define HAVE_MREMAP 1 +#else +#define HAVE_MREMAP 0 +#endif +#endif +***/ +#undef HAVE_MREMAP /* Not available for U-Boot */ + +#ifdef HAVE_MMAP + +#include <unistd.h> +#include <fcntl.h> +#include <sys/mman.h> + +#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) +#define MAP_ANONYMOUS MAP_ANON +#endif + +#endif /* HAVE_MMAP */ + +/* + Access to system page size. 
To the extent possible, this malloc + manages memory from the system in page-size units. + + The following mechanics for getpagesize were adapted from + bsd/gnu getpagesize.h +*/ + +#define LACKS_UNISTD_H /* Shortcut for U-Boot */ +#define malloc_getpagesize 4096 + +#ifndef LACKS_UNISTD_H +# include <unistd.h> +#endif + +#ifndef malloc_getpagesize +# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */ +# ifndef _SC_PAGE_SIZE +# define _SC_PAGE_SIZE _SC_PAGESIZE +# endif +# endif +# ifdef _SC_PAGE_SIZE +# define malloc_getpagesize sysconf(_SC_PAGE_SIZE) +# else +# if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE) + extern size_t getpagesize(); +# define malloc_getpagesize getpagesize() +# else +# ifdef WIN32 +# define malloc_getpagesize (4096) /* TBD: Use 'GetSystemInfo' instead */ +# else +# ifndef LACKS_SYS_PARAM_H +# include <sys/param.h> +# endif +# ifdef EXEC_PAGESIZE +# define malloc_getpagesize EXEC_PAGESIZE +# else +# ifdef NBPG +# ifndef CLSIZE +# define malloc_getpagesize NBPG +# else +# define malloc_getpagesize (NBPG * CLSIZE) +# endif +# else +# ifdef NBPC +# define malloc_getpagesize NBPC +# else +# ifdef PAGESIZE +# define malloc_getpagesize PAGESIZE +# else +# define malloc_getpagesize (4096) /* just guess */ +# endif +# endif +# endif +# endif +# endif +# endif +# endif +#endif + +/* + + This version of malloc supports the standard SVID/XPG mallinfo + routine that returns a struct containing the same kind of + information you can get from malloc_stats. It should work on + any SVID/XPG compliant system that has a /usr/include/malloc.h + defining struct mallinfo. (If you'd like to install such a thing + yourself, cut out the preliminary declarations as described above + and below and save them in a malloc.h file. But there's no + compelling reason to bother to do this.) + + The main declaration needed is the mallinfo struct that is returned + (by-copy) by mallinfo(). 
The SVID/XPG mallinfo struct contains a + bunch of fields, most of which are not even meaningful in this + version of malloc. Some of these fields are instead filled by + mallinfo() with other numbers that might possibly be of interest. + + HAVE_USR_INCLUDE_MALLOC_H should be set if you have a + /usr/include/malloc.h file that includes a declaration of struct + mallinfo. If so, it is included; else an SVID2/XPG2 compliant + version is declared below. These must be precisely the same for + mallinfo() to work. + +*/ + +/* #define HAVE_USR_INCLUDE_MALLOC_H */ + +#ifdef HAVE_USR_INCLUDE_MALLOC_H +#include "/usr/include/malloc.h" +#else + +/* SVID2/XPG mallinfo structure */ + +struct mallinfo { + int arena; /* total space allocated from system */ + int ordblks; /* number of non-inuse chunks */ + int smblks; /* unused -- always zero */ + int hblks; /* number of mmapped regions */ + int hblkhd; /* total space in mmapped regions */ + int usmblks; /* unused -- always zero */ + int fsmblks; /* unused -- always zero */ + int uordblks; /* total allocated space */ + int fordblks; /* total non-inuse space */ + int keepcost; /* top-most, releasable (via malloc_trim) space */ +}; + +/* SVID2/XPG mallopt options */ + +#define M_MXFAST 1 /* UNUSED in this malloc */ +#define M_NLBLKS 2 /* UNUSED in this malloc */ +#define M_GRAIN 3 /* UNUSED in this malloc */ +#define M_KEEP 4 /* UNUSED in this malloc */ + +#endif + +/* mallopt options that actually do something */ + +#define M_TRIM_THRESHOLD -1 +#define M_TOP_PAD -2 +#define M_MMAP_THRESHOLD -3 +#define M_MMAP_MAX -4 + +#ifndef DEFAULT_TRIM_THRESHOLD +#define DEFAULT_TRIM_THRESHOLD (128 * 1024) +#endif + +/* + M_TRIM_THRESHOLD is the maximum amount of unused top-most memory + to keep before releasing via malloc_trim in free(). + + Automatic trimming is mainly useful in long-lived programs. 
+ Because trimming via sbrk can be slow on some systems, and can + sometimes be wasteful (in cases where programs immediately + afterward allocate more large chunks) the value should be high + enough so that your overall system performance would improve by + releasing. + + The trim threshold and the mmap control parameters (see below) + can be traded off with one another. Trimming and mmapping are + two different ways of releasing unused memory back to the + system. Between these two, it is often possible to keep + system-level demands of a long-lived program down to a bare + minimum. For example, in one test suite of sessions measuring + the XF86 X server on Linux, using a trim threshold of 128K and a + mmap threshold of 192K led to near-minimal long term resource + consumption. + + If you are using this malloc in a long-lived program, it should + pay to experiment with these values. As a rough guide, you + might set to a value close to the average size of a process + (program) running on your system. Releasing this much memory + would allow such a process to run in memory. Generally, it's + worth it to tune for trimming rather than memory mapping when a + program undergoes phases where several large chunks are + allocated and released in ways that can reuse each other's + storage, perhaps mixed with phases where there are no such + chunks at all. And in well-behaved long-lived programs, + controlling release of large blocks via trimming versus mapping + is usually faster. + + However, in most programs, these parameters serve mainly as + protection against the system-level effects of carrying around + massive amounts of unneeded memory. Since frequent calls to + sbrk, mmap, and munmap otherwise degrade performance, the default + parameters are set to relatively high values that serve only as + safeguards. + + The default trim value is high enough to cause trimming only in + fairly extreme (by current memory consumption standards) cases. 
+ It must be greater than page size to have any useful effect. To + disable trimming completely, you can set to (unsigned long)(-1); + +*/ + +#ifndef DEFAULT_TOP_PAD +#define DEFAULT_TOP_PAD (0) +#endif + +/* + M_TOP_PAD is the amount of extra `padding' space to allocate or + retain whenever sbrk is called. It is used in two ways internally: + + * When sbrk is called to extend the top of the arena to satisfy + a new malloc request, this much padding is added to the sbrk + request. + + * When malloc_trim is called automatically from free(), + it is used as the `pad' argument. + + In both cases, the actual amount of padding is rounded + so that the end of the arena is always a system page boundary. + + The main reason for using padding is to avoid calling sbrk so + often. Having even a small pad greatly reduces the likelihood + that nearly every malloc request during program start-up (or + after trimming) will invoke sbrk, which needlessly wastes + time. + + Automatic rounding-up to page-size units is normally sufficient + to avoid measurable overhead, so the default is 0. However, in + systems where sbrk is relatively slow, it can pay to increase + this value, at the expense of carrying around more memory than + the program needs. + +*/ + +#ifndef DEFAULT_MMAP_THRESHOLD +#define DEFAULT_MMAP_THRESHOLD (128 * 1024) +#endif + +/* + + M_MMAP_THRESHOLD is the request size threshold for using mmap() + to service a request. Requests of at least this size that cannot + be allocated using already-existing space will be serviced via mmap. + (If enough normal freed space already exists it is used instead.) + + Using mmap segregates relatively large chunks of memory so that + they can be individually obtained and released from the host + system. A request serviced through mmap is never reused by any + other request (at least not directly; the system may just so + happen to remap successive requests to the same locations). 
+
+ Segregating space in this way has the benefit that mmapped space
+ can ALWAYS be individually released back to the system, which
+ helps keep the system level memory demands of a long-lived
+ program low. Mapped memory can never become `locked' between
+ other chunks, as can happen with normally allocated chunks, which
+ means that even trimming via malloc_trim would not release them.
+
+ However, it has the disadvantages that:
+
+ 1. The space cannot be reclaimed, consolidated, and then
+ used to service later requests, as happens with normal chunks.
+ 2. It can lead to more wastage because of mmap page alignment
+ requirements.
+ 3. It causes malloc performance to be more dependent on host
+ system memory management support routines which may vary in
+ implementation quality and may impose arbitrary
+ limitations. Generally, servicing a request via normal
+ malloc steps is faster than going through a system's mmap.
+
+ Altogether, these considerations should lead you to use mmap
+ only for relatively large requests.
+
+*/
+
+#ifndef DEFAULT_MMAP_MAX
+#ifdef HAVE_MMAP
+#define DEFAULT_MMAP_MAX (64)
+#else
+#define DEFAULT_MMAP_MAX (0)
+#endif
+#endif
+
+/*
+ M_MMAP_MAX is the maximum number of requests to simultaneously
+ service using mmap. This parameter exists because:
+
+ 1. Some systems have a limited number of internal tables for
+ use by mmap.
+ 2. In most systems, overreliance on mmap can degrade overall
+ performance.
+ 3. If a program allocates many large regions, it is probably
+ better off using normal sbrk-based allocation routines that
+ can reclaim and reallocate normal heap memory. Using a
+ small value allows transition into this mode after the
+ first few allocations.
+
+ Setting to 0 disables all use of mmap. If HAVE_MMAP is not set,
+ the default value is 0, and attempts to set it to non-zero values
+ in mallopt will fail.
+*/
+
+/*
+ USE_DL_PREFIX will prefix all public routines with the string 'dl'.
+ Useful to quickly avoid procedure declaration conflicts and linker
+ symbol conflicts with existing memory allocation routines.
+
+*/
+
+/*
+ * Rename the U-Boot alloc functions so that sandbox can still use the system
+ * ones
+ */
+#ifdef CONFIG_SANDBOX
+#define USE_DL_PREFIX
+#endif
+
+/*
+
+ Special defines for linux libc
+
+ Except when compiled using these special defines for Linux libc
+ using weak aliases, this malloc is NOT designed to work in
+ multithreaded applications. No semaphores or other concurrency
+ control are provided to ensure that multiple malloc or free calls
+ don't run at the same time, which could be disastrous. A single
+ semaphore could be used across malloc, realloc, and free (which is
+ essentially the effect of the linux weak alias approach). It would
+ be hard to obtain finer granularity.
+
+*/
+
+#ifdef INTERNAL_LINUX_C_LIB
+
+#if __STD_C
+
+Void_t * __default_morecore_init (ptrdiff_t);
+Void_t *(*__morecore)(ptrdiff_t) = __default_morecore_init;
+
+#else
+
+Void_t * __default_morecore_init ();
+Void_t *(*__morecore)() = __default_morecore_init;
+
+#endif
+
+#define MORECORE (*__morecore)
+#define MORECORE_FAILURE 0
+#define MORECORE_CLEARS 1
+
+#else /* INTERNAL_LINUX_C_LIB */
+
+#if __STD_C
+extern Void_t* sbrk(ptrdiff_t);
+#else
+extern Void_t* sbrk();
+#endif
+
+#ifndef MORECORE
+#define MORECORE sbrk
+#endif
+
+#ifndef MORECORE_FAILURE
+#define MORECORE_FAILURE -1
+#endif
+
+#ifndef MORECORE_CLEARS
+#define MORECORE_CLEARS 1
+#endif
+
+#endif /* INTERNAL_LINUX_C_LIB */
+
+#if defined(INTERNAL_LINUX_C_LIB) && defined(__ELF__)
+
+#define cALLOc __libc_calloc
+#define fREe __libc_free
+#define mALLOc __libc_malloc
+#define mEMALIGn __libc_memalign
+#define rEALLOc __libc_realloc
+#define vALLOc __libc_valloc
+#define pvALLOc __libc_pvalloc
+#define mALLINFo __libc_mallinfo
+#define mALLOPt __libc_mallopt
+
+#pragma weak calloc = __libc_calloc
+#pragma weak free = __libc_free
+#pragma weak cfree = __libc_free
+#pragma weak
malloc = __libc_malloc +#pragma weak memalign = __libc_memalign +#pragma weak realloc = __libc_realloc +#pragma weak valloc = __libc_valloc +#pragma weak pvalloc = __libc_pvalloc +#pragma weak mallinfo = __libc_mallinfo +#pragma weak mallopt = __libc_mallopt + +#else + +void malloc_simple_info(void); + +/** + * malloc_enable_testing() - Put malloc() into test mode + * + * This only works if UNIT_TESTING is enabled + * + * @max_allocs: return -ENOMEM after max_allocs calls to malloc() + */ +void malloc_enable_testing(int max_allocs); + +/** malloc_disable_testing() - Put malloc() into normal mode */ +void malloc_disable_testing(void); + +#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE) +#define malloc malloc_simple +#define realloc realloc_simple +#define calloc calloc_simple +#define memalign memalign_simple +#if IS_ENABLED(CONFIG_VALGRIND) +#define free free_simple +#else +static inline void free(void *ptr) {} +#endif +void *calloc(size_t nmemb, size_t size); +void *realloc_simple(void *ptr, size_t size); +#else + +# ifdef USE_DL_PREFIX +# define cALLOc dlcalloc +# define fREe dlfree +# define mALLOc dlmalloc +# define mEMALIGn dlmemalign +# define rEALLOc dlrealloc +# define vALLOc dlvalloc +# define pvALLOc dlpvalloc +# define mALLINFo dlmallinfo +# define mALLOPt dlmallopt + +/* Ensure that U-Boot actually uses these too */ +#define calloc dlcalloc +#define free(ptr) dlfree(ptr) +#define malloc(x) dlmalloc(x) +#define memalign dlmemalign +#define realloc dlrealloc +#define valloc dlvalloc +#define pvalloc dlpvalloc +#define mallinfo() dlmallinfo() +#define mallopt dlmallopt +#define malloc_trim dlmalloc_trim +#define malloc_usable_size dlmalloc_usable_size +#define malloc_stats dlmalloc_stats + +# else /* USE_DL_PREFIX */ +# define cALLOc calloc +# define fREe free +# define mALLOc malloc +# define mEMALIGn memalign +# define rEALLOc realloc +# define vALLOc valloc +# define pvALLOc pvalloc +# define mALLINFo mallinfo +# define mALLOPt mallopt +# endif /* USE_DL_PREFIX 
*/ + +#endif + +/* Set up pre-relocation malloc() ready for use */ +int initf_malloc(void); + +/* Public routines */ + +/* Simple versions which can be used when space is tight */ +void *malloc_simple(size_t size); +void *memalign_simple(size_t alignment, size_t bytes); + +#pragma GCC visibility push(hidden) +# if __STD_C + +Void_t* mALLOc(size_t); +void fREe(Void_t*); +Void_t* rEALLOc(Void_t*, size_t); +Void_t* mEMALIGn(size_t, size_t); +Void_t* vALLOc(size_t); +Void_t* pvALLOc(size_t); +Void_t* cALLOc(size_t, size_t); +void cfree(Void_t*); +int malloc_trim(size_t); +size_t malloc_usable_size(Void_t*); +void malloc_stats(void); +int mALLOPt(int, int); +struct mallinfo mALLINFo(void); +# else +Void_t* mALLOc(); +void fREe(); +Void_t* rEALLOc(); +Void_t* mEMALIGn(); +Void_t* vALLOc(); +Void_t* pvALLOc(); +Void_t* cALLOc(); +void cfree(); +int malloc_trim(); +size_t malloc_usable_size(); +void malloc_stats(); +int mALLOPt(); +struct mallinfo mALLINFo(); +# endif +#endif +#pragma GCC visibility pop + +/* + * Begin and End of memory area for malloc(), and current "brk" + */ +extern ulong mem_malloc_start; +extern ulong mem_malloc_end; +extern ulong mem_malloc_brk; + +/** + * mem_malloc_init() - Set up the malloc() pool + * + * Sets the region of memory to be used for all future calls to malloc(), etc. 
+ *
+ * @start: Start address
+ * @size: Size in bytes
+ */
+void mem_malloc_init(ulong start, ulong size);
+
+#ifdef __cplusplus
+}; /* end of extern "C" */
+#endif
+
+#endif /* __MALLOC_H__ */

From patchwork Mon Dec 1 17:04:46 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 978
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:46 -0700
Message-ID:
<20251201170529.3237986-8-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 07/37] malloc: Import dlmalloc 2.8.6
From: Simon Glass

Import the latest dlmalloc (2.8.6) from Doug Lea, preserving the
current version as dlmalloc_old.c/malloc_old.h for reference. For now
the current version is still active.

Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 6292 +++++++++++++++++++++++++++++++++++++++++++++
 include/malloc.h  |  625 +++++
 2 files changed, 6917 insertions(+)
 create mode 100644 common/dlmalloc.c

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
new file mode 100644
index 00000000000..ba8ca5bd0ae
--- /dev/null
+++ b/common/dlmalloc.c
@@ -0,0 +1,6292 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+Copyright 2023 Doug Lea
+
+Permission is hereby granted, free of charge, to any person obtaining
+a copy of this software and associated documentation files (the
+"Software"), to deal in the Software without restriction, including
+without limitation the rights to use, copy, modify, merge, publish,
+distribute, sublicense, and/or sell copies of the Software, and to
+permit persons to whom the Software is furnished to do so.
+ +THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, +EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF +MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND +NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE +LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION +OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION +WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. + +* Version 2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea + Re-licensed 25 Sep 2023 with MIT-0 replacing obsolete CC0 + See https://opensource.org/license/mit-0/ + +* Quickstart + + This library is all in one file to simplify the most common usage: + ftp it, compile it (-O3), and link it into another program. All of + the compile-time options default to reasonable values for use on + most platforms. You might later want to step through various + compile-time and dynamic tuning options. + + For convenience, an include file for code using this malloc is at: + ftp://gee.cs.oswego.edu/pub/misc/malloc-2.8.6.h + You don't really need this .h file unless you call functions not + defined in your system include files. The .h file contains only the + excerpts from this file needed for using this malloc on ANSI C/C++ + systems, so long as you haven't changed compile-time options about + naming and tuning parameters. If you do, then you can create your + own malloc.h that does include all settings by cutting at the point + indicated below. Note that you may already by default be using a C + library containing a malloc that is based on some version of this + malloc (for example in linux). You might still want to use the one + in this file to customize settings or to avoid overheads associated + with library versions. + +* Vital statistics: + + Supported pointer/size_t representation: 4 or 8 bytes + size_t MUST be an unsigned type of the same width as + pointers. 
(If you are using an ancient system that declares + size_t as a signed type, or need it to be a different width + than pointers, you can use a previous release of this malloc + (e.g. 2.7.2) supporting these.) + + Alignment: 8 bytes (minimum) + This suffices for nearly all current machines and C compilers. + However, you can define MALLOC_ALIGNMENT to be wider than this + if necessary (up to 128bytes), at the expense of using more space. + + Minimum overhead per allocated chunk: 4 or 8 bytes (if 4byte sizes) + 8 or 16 bytes (if 8byte sizes) + Each malloced chunk has a hidden word of overhead holding size + and status information, and additional cross-check word + if FOOTERS is defined. + + Minimum allocated size: 4-byte ptrs: 16 bytes (including overhead) + 8-byte ptrs: 32 bytes (including overhead) + + Even a request for zero bytes (i.e., malloc(0)) returns a + pointer to something of the minimum allocatable size. + The maximum overhead wastage (i.e., number of extra bytes + allocated than were requested in malloc) is less than or equal + to the minimum size, except for requests >= mmap_threshold that + are serviced via mmap(), where the worst case wastage is about + 32 bytes plus the remainder from a system page (the minimal + mmap unit); typically 4096 or 8192 bytes. + + Security: static-safe; optionally more or less + The "security" of malloc refers to the ability of malicious + code to accentuate the effects of errors (for example, freeing + space that is not currently malloc'ed or overwriting past the + ends of chunks) in code that calls malloc. This malloc + guarantees not to modify any memory locations below the base of + heap, i.e., static variables, even in the presence of usage + errors. The routines additionally detect most improper frees + and reallocs. All this holds as long as the static bookkeeping + for malloc itself is not corrupted by some other means. 
This
+ is only one aspect of security -- these checks do not, and
+ cannot, detect all possible programming errors.
+
+ If FOOTERS is defined nonzero, then each allocated chunk
+ carries an additional check word to verify that it was malloced
+ from its space. These check words are the same within each
+ execution of a program using malloc, but differ across
+ executions, so externally crafted fake chunks cannot be
+ freed. This improves security by rejecting frees/reallocs that
+ could corrupt heap memory, in addition to the checks preventing
+ writes to statics that are always on. This may further improve
+ security at the expense of time and space overhead. (Note that
+ FOOTERS may also be worth using with MSPACES.)
+
+ By default detected errors cause the program to abort (calling
+ "abort()"). You can override this to instead proceed past
+ errors by defining PROCEED_ON_ERROR. In this case, a bad free
+ has no effect, and a malloc that encounters a bad address
+ caused by user overwrites will ignore the bad address by
+ dropping pointers and indices to all known memory. This may
+ be appropriate for programs that should continue if at all
+ possible in the face of programming errors, although they may
+ run out of memory because dropped memory is never reclaimed.
+
+ If you don't like either of these options, you can define
+ CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION to do anything
+ else. And if you are sure that your program using malloc has
+ no errors or vulnerabilities, you can define INSECURE to 1,
+ which might (or might not) provide a small performance improvement.
+
+ It is also possible to limit the maximum total allocatable
+ space, using malloc_set_footprint_limit. This is not
+ designed as a security feature in itself (calls to set limits
+ are not screened or privileged), but may be useful as one
+ aspect of a secure implementation.
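The CORRUPTION_ERROR_ACTION and USAGE_ERROR_ACTION hooks mentioned above can be redirected at build time; a hedged sketch of a configuration fragment (only the option macros come from this file; the handler functions here are hypothetical):

```c
/* Configuration fragment, defined before compiling dlmalloc.c */
#define PROCEED_ON_ERROR 0
/* invoked when internal bookkeeping appears corrupted */
#define CORRUPTION_ERROR_ACTION(m)  my_heap_panic()     /* hypothetical */
/* invoked on a bad pointer passed to free()/realloc() */
#define USAGE_ERROR_ACTION(m, p)    my_log_bad_free(p)  /* hypothetical */
```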
+
+ Thread-safety: NOT thread-safe unless USE_LOCKS defined non-zero
+ When USE_LOCKS is defined, each public call to malloc, free,
+ etc is surrounded with a lock. By default, this uses a plain
+ pthread mutex, win32 critical section, or a spin-lock if
+ available for the platform and not disabled by setting
+ USE_SPIN_LOCKS=0. However, if USE_RECURSIVE_LOCKS is defined,
+ recursive versions are used instead (which are not required for
+ base functionality but may be needed in layered extensions).
+ Using a global lock is not especially fast, and can be a major
+ bottleneck. It is designed only to provide minimal protection
+ in concurrent environments, and to provide a basis for
+ extensions. If you are using malloc in a concurrent program,
+ consider instead using nedmalloc
+ (http://www.nedprod.com/programs/portable/nedmalloc/) or
+ ptmalloc (See http://www.malloc.de), which are derived from
+ versions of this malloc.
+
+ System requirements: Any combination of MORECORE and/or MMAP/MUNMAP
+ This malloc can use unix sbrk or any emulation (invoked using
+ the CALL_MORECORE macro) and/or mmap/munmap or any emulation
+ (invoked using CALL_MMAP/CALL_MUNMAP) to get and release system
+ memory. On most unix systems, it tends to work best if both
+ MORECORE and MMAP are enabled. On Win32, it uses emulations
+ based on VirtualAlloc. It also uses common C library functions
+ like memset.
+
+ Compliance: I believe it is compliant with the Single Unix Specification
+ (See http://www.unix.org). Also SVID/XPG, ANSI C, and probably
+ others as well.
+
+* Overview of algorithms
+
+ This is not the fastest, most space-conserving, most portable, or
+ most tunable malloc ever written. However it is among the fastest
+ while also being among the most space-conserving, portable and
+ tunable. Consistent balance across these factors results in a good
+ general-purpose allocator for malloc-intensive programs.
+
+ In most ways, this malloc is a best-fit allocator.
Generally, it + chooses the best-fitting existing chunk for a request, with ties + broken in approximately least-recently-used order. (This strategy + normally maintains low fragmentation.) However, for requests less + than 256bytes, it deviates from best-fit when there is not an + exactly fitting available chunk by preferring to use space adjacent + to that used for the previous small request, as well as by breaking + ties in approximately most-recently-used order. (These enhance + locality of series of small allocations.) And for very large requests + (>= 256Kb by default), it relies on system memory mapping + facilities, if supported. (This helps avoid carrying around and + possibly fragmenting memory used only for large chunks.) + + All operations (except malloc_stats and mallinfo) have execution + times that are bounded by a constant factor of the number of bits in + a size_t, not counting any clearing in calloc or copying in realloc, + or actions surrounding MORECORE and MMAP that have times + proportional to the number of non-contiguous regions returned by + system allocation routines, which is often just 1. In real-time + applications, you can optionally suppress segment traversals using + NO_SEGMENT_TRAVERSAL, which assures bounded execution even when + system allocators return non-contiguous spaces, at the typical + expense of carrying around more memory and increased fragmentation. + + The implementation is not very modular and seriously overuses + macros. Perhaps someday all C compilers will do as good a job + inlining modular code as can now be done by brute-force expansion, + but now, enough of them seem not to. + + Some compilers issue a lot of warnings about code that is + dead/unreachable only on some platforms, and also about intentional + uses of negation on unsigned types. All known cases of each can be + ignored. 
+ + For a longer but out of date high-level description, see + http://gee.cs.oswego.edu/dl/html/malloc.html + +* MSPACES + If MSPACES is defined, then in addition to malloc, free, etc., + this file also defines mspace_malloc, mspace_free, etc. These + are versions of malloc routines that take an "mspace" argument + obtained using create_mspace, to control all internal bookkeeping. + If ONLY_MSPACES is defined, only these versions are compiled. + So if you would like to use this allocator for only some allocations, + and your system malloc for others, you can compile with + ONLY_MSPACES and then do something like... + static mspace mymspace = create_mspace(0,0); // for example + #define mymalloc(bytes) mspace_malloc(mymspace, bytes) + + (Note: If you only need one instance of an mspace, you can instead + use "USE_DL_PREFIX" to relabel the global malloc.) + + You can similarly create thread-local allocators by storing + mspaces as thread-locals. For example: + static __thread mspace tlms = 0; + void* tlmalloc(size_t bytes) { + if (tlms == 0) tlms = create_mspace(0, 0); + return mspace_malloc(tlms, bytes); + } + void tlfree(void* mem) { mspace_free(tlms, mem); } + + Unless FOOTERS is defined, each mspace is completely independent. + You cannot allocate from one and free to another (although + conformance is only weakly checked, so usage errors are not always + caught). If FOOTERS is defined, then each chunk carries around a tag + indicating its originating mspace, and frees are directed to their + originating spaces. Normally, this requires use of locks. + + ------------------------- Compile-time options --------------------------- + +Be careful in setting #define values for numerical constants of type +size_t. On some systems, literal values are not automatically extended +to size_t precision unless they are explicitly casted. You can also +use the symbolic values MAX_SIZE_T, SIZE_T_ONE, etc below. 
+ +WIN32 default: defined if _WIN32 defined + Defining WIN32 sets up defaults for MS environment and compilers. + Otherwise defaults are for unix. Beware that there seem to be some + cases where this malloc might not be a pure drop-in replacement for + Win32 malloc: Random-looking failures from Win32 GDI API's (eg; + SetDIBits()) may be due to bugs in some video driver implementations + when pixel buffers are malloc()ed, and the region spans more than + one VirtualAlloc()ed region. Because dlmalloc uses a small (64Kb) + default granularity, pixel buffers may straddle virtual allocation + regions more often than when using the Microsoft allocator. You can + avoid this by using VirtualAlloc() and VirtualFree() for all pixel + buffers rather than using malloc(). If this is not possible, + recompile this malloc with a larger DEFAULT_GRANULARITY. Note: + in cases where MSC and gcc (cygwin) are known to differ on WIN32, + conditions use _MSC_VER to distinguish them. + +DLMALLOC_EXPORT default: extern + Defines how public APIs are declared. If you want to export via a + Windows DLL, you might define this as + #define DLMALLOC_EXPORT extern __declspec(dllexport) + If you want a POSIX ELF shared object, you might use + #define DLMALLOC_EXPORT extern __attribute__((visibility("default"))) + +MALLOC_ALIGNMENT default: (size_t)(2 * sizeof(void *)) + Controls the minimum alignment for malloc'ed chunks. It must be a + power of two and at least 8, even on machines for which smaller + alignments would suffice. It may be defined as larger than this + though. Note however that code and data structures are optimized for + the case of 8-byte alignment. + +MSPACES default: 0 (false) + If true, compile in support for independent allocation spaces. + This is only supported if HAVE_MMAP is true. + +ONLY_MSPACES default: 0 (false) + If true, only compile in mspace versions, not regular versions. 
+ +USE_LOCKS default: 0 (false) + Causes each call to each public routine to be surrounded with + pthread or WIN32 mutex lock/unlock. (If set true, this can be + overridden on a per-mspace basis for mspace versions.) If set to a + non-zero value other than 1, locks are used, but their + implementation is left out, so lock functions must be supplied manually, + as described below. + +USE_SPIN_LOCKS default: 1 iff USE_LOCKS and spin locks available + If true, uses custom spin locks for locking. This is currently + supported only gcc >= 4.1, older gccs on x86 platforms, and recent + MS compilers. Otherwise, posix locks or win32 critical sections are + used. + +USE_RECURSIVE_LOCKS default: not defined + If defined nonzero, uses recursive (aka reentrant) locks, otherwise + uses plain mutexes. This is not required for malloc proper, but may + be needed for layered allocators such as nedmalloc. + +LOCK_AT_FORK default: not defined + If defined nonzero, performs pthread_atfork upon initialization + to initialize child lock while holding parent lock. The implementation + assumes that pthread locks (not custom locks) are being used. In other + cases, you may need to customize the implementation. + +FOOTERS default: 0 + If true, provide extra checking and dispatching by placing + information in the footers of allocated chunks. This adds + space and time overhead. + +INSECURE default: 0 + If true, omit checks for usage errors and heap space overwrites. + +USE_DL_PREFIX default: NOT defined + Causes compiler to prefix all public routines with the string 'dl'. + This can be useful when you only want to use this malloc in one part + of a program, using your regular system malloc elsewhere. + +MALLOC_INSPECT_ALL default: NOT defined + If defined, compiles malloc_inspect_all and mspace_inspect_all, that + perform traversal of all heap space. Unless access to these + functions is otherwise restricted, you probably do not want to + include them in secure implementations. 
+
+ABORT default: defined as abort()
+ Defines how to abort on failed checks. On most systems, a failed
+ check cannot die with an "assert" or even print an informative
+ message, because the underlying print routines in turn call malloc,
+ which will fail again. Generally, the best policy is to simply call
+ abort(). It's not very useful to do more than this because many
+ errors due to overwriting will show up as address faults (null, odd
+ addresses etc) rather than malloc-triggered checks, so will also
+ abort. Also, most compilers know that abort() does not return, so
+ can better optimize code conditionally calling it.
+
+PROCEED_ON_ERROR default: defined as 0 (false)
+ Controls whether detected bad addresses cause them to be bypassed
+ rather than aborting. If set, detected bad arguments to free and
+ realloc are ignored. And all bookkeeping information is zeroed out
+ upon a detected overwrite of freed heap space, thus losing the
+ ability to ever return it from malloc again, but enabling the
+ application to proceed. If PROCEED_ON_ERROR is defined, the
+ static variable malloc_corruption_error_count is compiled in
+ and can be examined to see if errors have occurred. This option
+ generates slower code than the default abort policy.
+
+DEBUG default: NOT defined
+ The DEBUG setting is mainly intended for people trying to modify
+ this code or diagnose problems when porting to new platforms.
+ However, it may also be able to better isolate user errors than just
+ using runtime checks. The assertions in the check routines spell
+ out in more detail the assumptions and invariants underlying the
+ algorithms. The checking is fairly extensive, and will slow down
+ execution noticeably. Calling malloc_stats or mallinfo with DEBUG
+ set will attempt to check every non-mmapped allocated and free chunk
+ in the course of computing the summaries.
+
+ABORT_ON_ASSERT_FAILURE default: defined as 1 (true)
+ Debugging assertion failures can be nearly impossible if your
+ version of the assert macro causes malloc to be called, which will
+ lead to a cascade of further failures, blowing the runtime stack.
+ ABORT_ON_ASSERT_FAILURE causes assertion failures to call abort(),
+ which will usually make debugging easier.
+
+MALLOC_FAILURE_ACTION default: sets errno to ENOMEM, or no-op on win32
+ The action to take before "return 0" when malloc fails to be able to
+ return memory because there is none available.
+
+HAVE_MORECORE default: 1 (true) unless win32 or ONLY_MSPACES
+ True if this system supports sbrk or an emulation of it.
+
+MORECORE default: sbrk
+ The name of the sbrk-style system routine to call to obtain more
+ memory. See below for guidance on writing custom MORECORE
+ functions. The type of the argument to sbrk/MORECORE varies across
+ systems. It cannot be size_t, because it supports negative
+ arguments, so it is normally the signed type of the same width as
+ size_t (sometimes declared as "intptr_t"). It doesn't much matter
+ though. Internally, we only call it with arguments less than half
+ the max value of a size_t, which should work across all reasonable
+ possibilities, although sometimes generating compiler warnings.
+
+MORECORE_CONTIGUOUS default: 1 (true) if HAVE_MORECORE
+ If true, take advantage of fact that consecutive calls to MORECORE
+ with positive arguments always return contiguous increasing
+ addresses. This is true of unix sbrk. It does not hurt too much to
+ set it true anyway, since malloc copes with non-contiguities.
+ Setting it false when definitely non-contiguous saves time
+ and possibly wasted space it would take to discover this though.
+
+MORECORE_CANNOT_TRIM default: NOT defined
+ True if MORECORE cannot release space back to the system when given
+ negative arguments.
This is generally necessary only if you are + using a hand-crafted MORECORE function that cannot handle negative + arguments. + +NO_SEGMENT_TRAVERSAL default: 0 + If non-zero, suppresses traversals of memory segments + returned by either MORECORE or CALL_MMAP. This disables + merging of segments that are contiguous, and selectively + releasing them to the OS if unused, but bounds execution times. + +HAVE_MMAP default: 1 (true) + True if this system supports mmap or an emulation of it. If so, and + HAVE_MORECORE is not true, MMAP is used for all system + allocation. If set and HAVE_MORECORE is true as well, MMAP is + primarily used to directly allocate very large blocks. It is also + used as a backup strategy in cases where MORECORE fails to provide + space from the system. Note: A single call to MUNMAP is assumed to be + able to unmap memory that may have been allocated using multiple calls + to MMAP, so long as they are adjacent. + +HAVE_MREMAP default: 1 on linux, else 0 + If true, realloc() uses mremap() to re-allocate large blocks and + extend or shrink allocation spaces. + +MMAP_CLEARS default: 1 except on WINCE. + True if mmap clears memory so calloc doesn't need to. This is true + for standard unix mmap using /dev/zero and on WIN32 except for WINCE. + +USE_BUILTIN_FFS default: 0 (i.e., not used) + Causes malloc to use the builtin ffs() function to compute indices. + Some compilers may recognize and intrinsify ffs to be faster than the + supplied C version. Also, the case of x86 using gcc is special-cased + to an asm instruction, so is already as fast as it can be, and so + this setting has no effect. Similarly for Win32 under recent MS compilers. + (On most x86s, the asm version is only slightly faster than the C version.) + +malloc_getpagesize default: derive from system includes, or 4096. + The system page size. To the extent possible, this malloc manages + memory from the system in page-size units.
This may be (and + usually is) a function rather than a constant. This is ignored + if WIN32, where page size is determined using getSystemInfo during + initialization. + +USE_DEV_RANDOM default: 0 (i.e., not used) + Causes malloc to use /dev/random to initialize secure magic seed for + stamping footers. Otherwise, the current time is used. + +NO_MALLINFO default: 0 + If defined, don't compile "mallinfo". This can be a simple way + of dealing with mismatches between system declarations and + those in this file. + +MALLINFO_FIELD_TYPE default: size_t + The type of the fields in the mallinfo struct. This was originally + defined as "int" in SVID etc, but is more usefully defined as + size_t. The value is used only if HAVE_USR_INCLUDE_MALLOC_H is not set. + +NO_MALLOC_STATS default: 0 + If defined, don't compile "malloc_stats". This avoids calls to + fprintf and bringing in stdio dependencies you might not want. + +REALLOC_ZERO_BYTES_FREES default: not defined + This should be set if a call to realloc with zero bytes should + be the same as a call to free. Some people think it should. Otherwise, + because this malloc returns a unique pointer for malloc(0), + realloc(p, 0) does as well. + +LACKS_UNISTD_H, LACKS_FCNTL_H, LACKS_SYS_PARAM_H, LACKS_SYS_MMAN_H, +LACKS_STRINGS_H, LACKS_STRING_H, LACKS_SYS_TYPES_H, LACKS_ERRNO_H, +LACKS_STDLIB_H, LACKS_SCHED_H, LACKS_TIME_H default: NOT defined unless on WIN32 + Define these if your system does not have these header files. + You might need to manually insert some of the declarations they provide. + +DEFAULT_GRANULARITY default: page size if MORECORE_CONTIGUOUS, + system_info.dwAllocationGranularity in WIN32, + otherwise 64K. + Also settable using mallopt(M_GRANULARITY, x) + The unit for allocating and deallocating memory from the system. On + most systems with contiguous MORECORE, there is no reason to + make this more than a page. However, systems with MMAP tend to + either require or encourage larger granularities.
You can increase + this value to prevent system allocation functions from being called so + often, especially if they are slow. The value must be at least one + page and must be a power of two. Setting it to 0 causes initialization + to either page size or win32 region size. (Note: In previous + versions of malloc, the equivalent of this option was called + "TOP_PAD") + +DEFAULT_TRIM_THRESHOLD default: 2MB + Also settable using mallopt(M_TRIM_THRESHOLD, x) + The maximum amount of unused top-most memory to keep before + releasing via malloc_trim in free(). Automatic trimming is mainly + useful in long-lived programs using contiguous MORECORE. Because + trimming via sbrk can be slow on some systems, and can sometimes be + wasteful (in cases where programs immediately afterward allocate + more large chunks) the value should be high enough so that your + overall system performance would improve by releasing this much + memory. As a rough guide, you might set it to a value close to the + average size of a process (program) running on your system. + Releasing this much memory would allow such a process to run in + memory. Generally, it is worth tuning trim thresholds when a + program undergoes phases where several large chunks are allocated + and released in ways that can reuse each other's storage, perhaps + mixed with phases where there are no such chunks at all. The trim + value must be greater than page size to have any useful effect. To + disable trimming completely, you can set it to MAX_SIZE_T. Note that the trick + some people use of mallocing a huge space and then freeing it at + program startup, in an attempt to reserve system memory, doesn't + have the intended effect under automatic trimming, since that memory + will immediately be returned to the system. + +DEFAULT_MMAP_THRESHOLD default: 256K + Also settable using mallopt(M_MMAP_THRESHOLD, x) + The request size threshold for using MMAP to directly service a + request.
Requests of at least this size that cannot be allocated + using already-existing space will be serviced via mmap. (If enough + normal freed space already exists it is used instead.) Using mmap + segregates relatively large chunks of memory so that they can be + individually obtained and released from the host system. A request + serviced through mmap is never reused by any other request (at least + not directly; the system may just so happen to remap successive + requests to the same locations). Segregating space in this way has + the benefits that: Mmapped space can always be individually released + back to the system, which helps keep the system level memory demands + of a long-lived program low. Also, mapped memory doesn't become + `locked' between other chunks, as can happen with normally allocated + chunks, which means that even trimming via malloc_trim would not + release them. However, it has the disadvantage that the space + cannot be reclaimed, consolidated, and then used to service later + requests, as happens with normal chunks. The advantages of mmap + nearly always outweigh disadvantages for "large" chunks, but the + value of "large" may vary across systems. The default is an + empirically derived value that works well in most systems. You can + disable mmap by setting to MAX_SIZE_T. + +MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP + The number of consolidated frees between checks to release + unused segments when freeing. When using non-contiguous segments, + especially with multiple mspaces, checking only for topmost space + doesn't always suffice to trigger trimming. To compensate for this, + free() will, with a period of MAX_RELEASE_CHECK_RATE (or the + current number of segments, if greater) try to release unused + segments to the OS when freeing chunks that result in + consolidation. 
The best value for this parameter is a compromise + between slowing down frees with relatively costly checks that + rarely trigger versus holding on to unused memory. To effectively + disable, set to MAX_SIZE_T. This may lead to a very slight speed + improvement at the expense of carrying around more memory. +*/ + +/* Version identifier to allow people to support multiple versions */ +#ifndef DLMALLOC_VERSION +#define DLMALLOC_VERSION 20806 +#endif /* DLMALLOC_VERSION */ + +#ifndef DLMALLOC_EXPORT +#define DLMALLOC_EXPORT extern +#endif + +#ifndef WIN32 +#ifdef _WIN32 +#define WIN32 1 +#endif /* _WIN32 */ +#ifdef _WIN32_WCE +#define LACKS_FCNTL_H +#define WIN32 1 +#endif /* _WIN32_WCE */ +#endif /* WIN32 */ +#ifdef WIN32 +#define WIN32_LEAN_AND_MEAN +#include <windows.h> +#include <tchar.h> +#define HAVE_MMAP 1 +#define HAVE_MORECORE 0 +#define LACKS_UNISTD_H +#define LACKS_SYS_PARAM_H +#define LACKS_SYS_MMAN_H +#define LACKS_STRING_H +#define LACKS_STRINGS_H +#define LACKS_SYS_TYPES_H +#define LACKS_ERRNO_H +#define LACKS_SCHED_H +#ifndef MALLOC_FAILURE_ACTION +#define MALLOC_FAILURE_ACTION +#endif /* MALLOC_FAILURE_ACTION */ +#ifndef MMAP_CLEARS +#ifdef _WIN32_WCE /* WINCE reportedly does not clear */ +#define MMAP_CLEARS 0 +#else +#define MMAP_CLEARS 1 +#endif /* _WIN32_WCE */ +#endif /* MMAP_CLEARS */ +#endif /* WIN32 */ + +#if defined(DARWIN) || defined(_DARWIN) +/* Mac OSX docs advise not to use sbrk; it seems better to use mmap */ +#ifndef HAVE_MORECORE +#define HAVE_MORECORE 0 +#define HAVE_MMAP 1 +/* OSX allocators provide 16 byte alignment */ +#ifndef MALLOC_ALIGNMENT +#define MALLOC_ALIGNMENT ((size_t)16U) +#endif +#endif /* HAVE_MORECORE */ +#endif /* DARWIN */ + +#ifndef LACKS_SYS_TYPES_H +#include <sys/types.h> /* For size_t */ +#endif /* LACKS_SYS_TYPES_H */ + +/* The maximum possible size_t value has all bits set */ +#define MAX_SIZE_T (~(size_t)0) + +#ifndef USE_LOCKS /* ensure true if spin or recursive locks set */ +#define USE_LOCKS ((defined(USE_SPIN_LOCKS) && USE_SPIN_LOCKS != 0)
|| \ + (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0)) +#endif /* USE_LOCKS */ + +#if USE_LOCKS /* Spin locks for gcc >= 4.1, older gcc on x86, MSC >= 1310 */ +#if ((defined(__GNUC__) && \ + ((__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) || \ + defined(__i386__) || defined(__x86_64__))) || \ + (defined(_MSC_VER) && _MSC_VER>=1310)) +#ifndef USE_SPIN_LOCKS +#define USE_SPIN_LOCKS 1 +#endif /* USE_SPIN_LOCKS */ +#elif USE_SPIN_LOCKS +#error "USE_SPIN_LOCKS defined without implementation" +#endif /* ... locks available... */ +#elif !defined(USE_SPIN_LOCKS) +#define USE_SPIN_LOCKS 0 +#endif /* USE_LOCKS */ + +#ifndef ONLY_MSPACES +#define ONLY_MSPACES 0 +#endif /* ONLY_MSPACES */ +#ifndef MSPACES +#if ONLY_MSPACES +#define MSPACES 1 +#else /* ONLY_MSPACES */ +#define MSPACES 0 +#endif /* ONLY_MSPACES */ +#endif /* MSPACES */ +#ifndef MALLOC_ALIGNMENT +#define MALLOC_ALIGNMENT ((size_t)(2 * sizeof(void *))) +#endif /* MALLOC_ALIGNMENT */ +#ifndef FOOTERS +#define FOOTERS 0 +#endif /* FOOTERS */ +#ifndef ABORT +#define ABORT abort() +#endif /* ABORT */ +#ifndef ABORT_ON_ASSERT_FAILURE +#define ABORT_ON_ASSERT_FAILURE 1 +#endif /* ABORT_ON_ASSERT_FAILURE */ +#ifndef PROCEED_ON_ERROR +#define PROCEED_ON_ERROR 0 +#endif /* PROCEED_ON_ERROR */ + +#ifndef INSECURE +#define INSECURE 0 +#endif /* INSECURE */ +#ifndef MALLOC_INSPECT_ALL +#define MALLOC_INSPECT_ALL 0 +#endif /* MALLOC_INSPECT_ALL */ +#ifndef HAVE_MMAP +#define HAVE_MMAP 1 +#endif /* HAVE_MMAP */ +#ifndef MMAP_CLEARS +#define MMAP_CLEARS 1 +#endif /* MMAP_CLEARS */ +#ifndef HAVE_MREMAP +#ifdef linux +#define HAVE_MREMAP 1 +#define _GNU_SOURCE /* Turns on mremap() definition */ +#else /* linux */ +#define HAVE_MREMAP 0 +#endif /* linux */ +#endif /* HAVE_MREMAP */ +#ifndef MALLOC_FAILURE_ACTION +#define MALLOC_FAILURE_ACTION errno = ENOMEM; +#endif /* MALLOC_FAILURE_ACTION */ +#ifndef HAVE_MORECORE +#if ONLY_MSPACES +#define HAVE_MORECORE 0 +#else /* ONLY_MSPACES */ +#define HAVE_MORECORE 1 
+#endif /* ONLY_MSPACES */ +#endif /* HAVE_MORECORE */ +#if !HAVE_MORECORE +#define MORECORE_CONTIGUOUS 0 +#else /* !HAVE_MORECORE */ +#define MORECORE_DEFAULT sbrk +#ifndef MORECORE_CONTIGUOUS +#define MORECORE_CONTIGUOUS 1 +#endif /* MORECORE_CONTIGUOUS */ +#endif /* HAVE_MORECORE */ +#ifndef DEFAULT_GRANULARITY +#if (MORECORE_CONTIGUOUS || defined(WIN32)) +#define DEFAULT_GRANULARITY (0) /* 0 means to compute in init_mparams */ +#else /* MORECORE_CONTIGUOUS */ +#define DEFAULT_GRANULARITY ((size_t)64U * (size_t)1024U) +#endif /* MORECORE_CONTIGUOUS */ +#endif /* DEFAULT_GRANULARITY */ +#ifndef DEFAULT_TRIM_THRESHOLD +#ifndef MORECORE_CANNOT_TRIM +#define DEFAULT_TRIM_THRESHOLD ((size_t)2U * (size_t)1024U * (size_t)1024U) +#else /* MORECORE_CANNOT_TRIM */ +#define DEFAULT_TRIM_THRESHOLD MAX_SIZE_T +#endif /* MORECORE_CANNOT_TRIM */ +#endif /* DEFAULT_TRIM_THRESHOLD */ +#ifndef DEFAULT_MMAP_THRESHOLD +#if HAVE_MMAP +#define DEFAULT_MMAP_THRESHOLD ((size_t)256U * (size_t)1024U) +#else /* HAVE_MMAP */ +#define DEFAULT_MMAP_THRESHOLD MAX_SIZE_T +#endif /* HAVE_MMAP */ +#endif /* DEFAULT_MMAP_THRESHOLD */ +#ifndef MAX_RELEASE_CHECK_RATE +#if HAVE_MMAP +#define MAX_RELEASE_CHECK_RATE 4095 +#else +#define MAX_RELEASE_CHECK_RATE MAX_SIZE_T +#endif /* HAVE_MMAP */ +#endif /* MAX_RELEASE_CHECK_RATE */ +#ifndef USE_BUILTIN_FFS +#define USE_BUILTIN_FFS 0 +#endif /* USE_BUILTIN_FFS */ +#ifndef USE_DEV_RANDOM +#define USE_DEV_RANDOM 0 +#endif /* USE_DEV_RANDOM */ +#ifndef NO_MALLINFO +#define NO_MALLINFO 0 +#endif /* NO_MALLINFO */ +#ifndef MALLINFO_FIELD_TYPE +#define MALLINFO_FIELD_TYPE size_t +#endif /* MALLINFO_FIELD_TYPE */ +#ifndef NO_MALLOC_STATS +#define NO_MALLOC_STATS 0 +#endif /* NO_MALLOC_STATS */ +#ifndef NO_SEGMENT_TRAVERSAL +#define NO_SEGMENT_TRAVERSAL 0 +#endif /* NO_SEGMENT_TRAVERSAL */ + +/* + mallopt tuning options. SVID/XPG defines four standard parameter + numbers for mallopt, normally defined in malloc.h. 
None of these + are used in this malloc, so setting them has no effect. But this + malloc does support the following options. +*/ + +#define M_TRIM_THRESHOLD (-1) +#define M_GRANULARITY (-2) +#define M_MMAP_THRESHOLD (-3) + +/* ------------------------ Mallinfo declarations ------------------------ */ + +#if !NO_MALLINFO +/* + This version of malloc supports the standard SVID/XPG mallinfo + routine that returns a struct containing usage properties and + statistics. It should work on any system that has a + /usr/include/malloc.h defining struct mallinfo. The main + declaration needed is the mallinfo struct that is returned (by-copy) + by mallinfo(). The mallinfo struct contains a bunch of fields that + are not even meaningful in this version of malloc. These fields + are instead filled by mallinfo() with other numbers that might be of + interest. + + HAVE_USR_INCLUDE_MALLOC_H should be set if you have a + /usr/include/malloc.h file that includes a declaration of struct + mallinfo. If so, it is included; else a compliant version is + declared below. These must be precisely the same for mallinfo() to + work. The original SVID version of this struct, defined on most + systems with mallinfo, declares all fields as ints. But some others + define them as unsigned long. If your system defines the fields using a + type of different width than listed here, you MUST #include your + system version and #define HAVE_USR_INCLUDE_MALLOC_H. +*/ + +/* #define HAVE_USR_INCLUDE_MALLOC_H */ + +#ifdef HAVE_USR_INCLUDE_MALLOC_H +#include "/usr/include/malloc.h" +#else /* HAVE_USR_INCLUDE_MALLOC_H */ +#ifndef STRUCT_MALLINFO_DECLARED +/* HP-UX (and others?)
redefines mallinfo unless _STRUCT_MALLINFO is defined */ +#define _STRUCT_MALLINFO +#define STRUCT_MALLINFO_DECLARED 1 +struct mallinfo { + MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */ + MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */ + MALLINFO_FIELD_TYPE smblks; /* always 0 */ + MALLINFO_FIELD_TYPE hblks; /* always 0 */ + MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */ + MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */ + MALLINFO_FIELD_TYPE fsmblks; /* always 0 */ + MALLINFO_FIELD_TYPE uordblks; /* total allocated space */ + MALLINFO_FIELD_TYPE fordblks; /* total free space */ + MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */ +}; +#endif /* STRUCT_MALLINFO_DECLARED */ +#endif /* HAVE_USR_INCLUDE_MALLOC_H */ +#endif /* NO_MALLINFO */ + +/* + Try to persuade compilers to inline. The most critical functions for + inlining are defined as macros, so these aren't used for them. +*/ + +#ifndef FORCEINLINE + #if defined(__GNUC__) +#define FORCEINLINE __inline __attribute__ ((always_inline)) + #elif defined(_MSC_VER) + #define FORCEINLINE __forceinline + #endif +#endif +#ifndef NOINLINE + #if defined(__GNUC__) + #define NOINLINE __attribute__ ((noinline)) + #elif defined(_MSC_VER) + #define NOINLINE __declspec(noinline) + #else + #define NOINLINE + #endif +#endif + +#ifdef __cplusplus +extern "C" { +#ifndef FORCEINLINE + #define FORCEINLINE inline +#endif +#endif /* __cplusplus */ +#ifndef FORCEINLINE + #define FORCEINLINE +#endif + +#if !ONLY_MSPACES + +/* ------------------- Declarations of public routines ------------------- */ + +#ifndef USE_DL_PREFIX +#define dlcalloc calloc +#define dlfree free +#define dlmalloc malloc +#define dlmemalign memalign +#define dlposix_memalign posix_memalign +#define dlrealloc realloc +#define dlrealloc_in_place realloc_in_place +#define dlvalloc valloc +#define dlpvalloc pvalloc +#define dlmallinfo mallinfo +#define dlmallopt mallopt +#define 
dlmalloc_trim malloc_trim +#define dlmalloc_stats malloc_stats +#define dlmalloc_usable_size malloc_usable_size +#define dlmalloc_footprint malloc_footprint +#define dlmalloc_max_footprint malloc_max_footprint +#define dlmalloc_footprint_limit malloc_footprint_limit +#define dlmalloc_set_footprint_limit malloc_set_footprint_limit +#define dlmalloc_inspect_all malloc_inspect_all +#define dlindependent_calloc independent_calloc +#define dlindependent_comalloc independent_comalloc +#define dlbulk_free bulk_free +#endif /* USE_DL_PREFIX */ + +/* + malloc(size_t n) + Returns a pointer to a newly allocated chunk of at least n bytes, or + null if no space is available, in which case errno is set to ENOMEM + on ANSI C systems. + + If n is zero, malloc returns a minimum-sized chunk. (The minimum + size is 16 bytes on most 32bit systems, and 32 bytes on 64bit + systems.) Note that size_t is an unsigned type, so calls with + arguments that would be negative if signed are interpreted as + requests for huge amounts of space, which will often fail. The + maximum supported value of n differs across systems, but is in all + cases less than the maximum representable value of a size_t. +*/ +DLMALLOC_EXPORT void* dlmalloc(size_t); + +/* + free(void* p) + Releases the chunk of memory pointed to by p, that had been previously + allocated using malloc or a related routine such as realloc. + It has no effect if p is null. If p was not malloced or already + freed, free(p) will by default cause the current program to abort. +*/ +DLMALLOC_EXPORT void dlfree(void*); + +/* + calloc(size_t n_elements, size_t element_size); + Returns a pointer to n_elements * element_size bytes, with all locations + set to zero. +*/ +DLMALLOC_EXPORT void* dlcalloc(size_t, size_t); + +/* + realloc(void* p, size_t n) + Returns a pointer to a chunk of size n that contains the same data + as does chunk p up to the minimum of (n, p's size) bytes, or null + if no space is available. 
+ + The returned pointer may or may not be the same as p. The algorithm + prefers extending p in most cases when possible, otherwise it + employs the equivalent of a malloc-copy-free sequence. + + If p is null, realloc is equivalent to malloc. + + If space is not available, realloc returns null, errno is set (if on + ANSI) and p is NOT freed. + + If n is for fewer bytes than already held by p, the newly unused + space is lopped off and freed if possible. realloc with a size + argument of zero (re)allocates a minimum-sized chunk. + + The old unix realloc convention of allowing the last-free'd chunk + to be used as an argument to realloc is not supported. +*/ +DLMALLOC_EXPORT void* dlrealloc(void*, size_t); + +/* + realloc_in_place(void* p, size_t n) + Resizes the space allocated for p to size n, only if this can be + done without moving p (i.e., only if there is adjacent space + available if n is greater than p's current allocated size, or n is + less than or equal to p's size). This may be used instead of plain + realloc if an alternative allocation strategy is needed upon failure + to expand space; for example, reallocation of a buffer that must be + memory-aligned or cleared. You can use realloc_in_place to trigger + these alternatives only when needed. + + Returns p if successful; otherwise null. +*/ +DLMALLOC_EXPORT void* dlrealloc_in_place(void*, size_t); + +/* + memalign(size_t alignment, size_t n); + Returns a pointer to a newly allocated chunk of n bytes, aligned + in accord with the alignment argument. + + The alignment argument should be a power of two. If the argument is + not a power of two, the nearest greater power is used. + 8-byte alignment is guaranteed by normal malloc calls, so don't + bother calling memalign with an argument of 8 or less. + + Overreliance on memalign is a sure way to fragment space.
+*/ +DLMALLOC_EXPORT void* dlmemalign(size_t, size_t); + +/* + int posix_memalign(void** pp, size_t alignment, size_t n); + Allocates a chunk of n bytes, aligned in accord with the alignment + argument. Differs from memalign only in that it (1) assigns the + allocated memory to *pp rather than returning it, (2) fails and + returns EINVAL if the alignment is not a power of two, and (3) fails and + returns ENOMEM if memory cannot be allocated. +*/ +DLMALLOC_EXPORT int dlposix_memalign(void**, size_t, size_t); + +/* + valloc(size_t n); + Equivalent to memalign(pagesize, n), where pagesize is the page + size of the system. If the pagesize is unknown, 4096 is used. +*/ +DLMALLOC_EXPORT void* dlvalloc(size_t); + +/* + mallopt(int parameter_number, int parameter_value) + Sets tunable parameters. The format is to provide a + (parameter-number, parameter-value) pair. mallopt then sets the + corresponding parameter to the argument value if it can (i.e., so + long as the value is meaningful), and returns 1 if successful else + 0. To work around the fact that mallopt is specified to use int, + not size_t parameters, the value -1 is specially treated as the + maximum unsigned size_t value. + + SVID/XPG/ANSI defines four standard param numbers for mallopt, + normally defined in malloc.h. None of these are used in this malloc, + so setting them has no effect. But this malloc also supports other + options in mallopt. See below for details. Briefly, supported + parameters are as follows (listed defaults are for "typical" + configurations). + + Symbol param # default allowed param values + M_TRIM_THRESHOLD -1 2*1024*1024 any (-1 disables) + M_GRANULARITY -2 page size any power of 2 >= page size + M_MMAP_THRESHOLD -3 256*1024 any (or 0 if no MMAP support) +*/ +DLMALLOC_EXPORT int dlmallopt(int, int); + +/* + malloc_footprint(); + Returns the number of bytes obtained from the system. The total + number of bytes allocated by malloc, realloc etc., is less than this + value.
Unlike mallinfo, this function returns only a precomputed + result, so can be called frequently to monitor memory consumption. + Even if locks are otherwise defined, this function does not use them, + so results might not be up to date. +*/ +DLMALLOC_EXPORT size_t dlmalloc_footprint(void); + +/* + malloc_max_footprint(); + Returns the maximum number of bytes obtained from the system. This + value will be greater than current footprint if deallocated space + has been reclaimed by the system. The peak number of bytes allocated + by malloc, realloc etc., is less than this value. Unlike mallinfo, + this function returns only a precomputed result, so can be called + frequently to monitor memory consumption. Even if locks are + otherwise defined, this function does not use them, so results might + not be up to date. +*/ +DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void); + +/* + malloc_footprint_limit(); + Returns the number of bytes that the heap is allowed to obtain from + the system, returning the last value returned by + malloc_set_footprint_limit, or the maximum size_t value if + never set. The returned value reflects a permission. There is no + guarantee that this number of bytes can actually be obtained from + the system. +*/ +DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(); + +/* + malloc_set_footprint_limit(); + Sets the maximum number of bytes to obtain from the system, causing + failure returns from malloc and related functions upon attempts to + exceed this value. The argument value may be subject to page + rounding to an enforceable limit; this actual value is returned. + Using an argument of the maximum possible size_t effectively + disables checks. If the argument is less than or equal to the + current malloc_footprint, then all future allocations that require + additional system memory will fail. However, invocation cannot + retroactively deallocate existing used memory. 
+*/ +DLMALLOC_EXPORT size_t dlmalloc_set_footprint_limit(size_t bytes); + +#if MALLOC_INSPECT_ALL +/* + malloc_inspect_all(void(*handler)(void *start, + void *end, + size_t used_bytes, + void* callback_arg), + void* arg); + Traverses the heap and calls the given handler for each managed + region, skipping all bytes that are (or may be) used for bookkeeping + purposes. Traversal does not include chunks that have been + directly memory mapped. Each reported region begins at the start + address, and continues up to but not including the end address. The + first used_bytes of the region contain allocated data. If + used_bytes is zero, the region is unallocated. The handler is + invoked with the given callback argument. If locks are defined, they + are held during the entire traversal. It is a bad idea to invoke + other malloc functions from within the handler. + + For example, to count the number of in-use chunks with size of at + least 1000, you could write: + static int count = 0; + void count_chunks(void* start, void* end, size_t used, void* arg) { + if (used >= 1000) ++count; + } + then: + malloc_inspect_all(count_chunks, NULL); + + malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined. +*/ +DLMALLOC_EXPORT void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*), + void* arg); + +#endif /* MALLOC_INSPECT_ALL */ + +#if !NO_MALLINFO +/* + mallinfo() + Returns (by copy) a struct containing various summary statistics: + + arena: current total non-mmapped bytes allocated from system + ordblks: the number of free chunks + smblks: always zero. + hblks: current number of mmapped regions + hblkhd: total bytes held in mmapped regions + usmblks: the maximum total allocated space. This will be greater + than current total if trimming has occurred.
+ fsmblks: always zero + uordblks: current total allocated space (normal or mmapped) + fordblks: total free space + keepcost: the maximum number of bytes that could ideally be released + back to system via malloc_trim. ("ideally" means that + it ignores page restrictions etc.) + + Because these fields are ints, but internal bookkeeping may + be kept as longs, the reported values may wrap around zero and + thus be inaccurate. +*/ +DLMALLOC_EXPORT struct mallinfo dlmallinfo(void); +#endif /* NO_MALLINFO */ + +/* + independent_calloc(size_t n_elements, size_t element_size, void* chunks[]); + + independent_calloc is similar to calloc, but instead of returning a + single cleared space, it returns an array of pointers to n_elements + independent elements that can hold contents of size elem_size, each + of which starts out cleared, and can be independently freed, + realloc'ed etc. The elements are guaranteed to be adjacently + allocated (this is not guaranteed to occur with multiple callocs or + mallocs), which may also improve cache locality in some + applications. + + The "chunks" argument is optional (i.e., may be null, which is + probably the most typical usage). If it is null, the returned array + is itself dynamically allocated and should also be freed when it is + no longer needed. Otherwise, the chunks array must be of at least + n_elements in length. It is filled in with the pointers to the + chunks. + + In either case, independent_calloc returns this pointer array, or + null if the allocation failed. If n_elements is zero and "chunks" + is null, it returns a chunk representing an array with zero elements + (which should be freed if not wanted). + + Each element must be freed when it is no longer needed. This can be + done all at once using bulk_free. + + independent_calloc simplifies and speeds up implementations of many + kinds of pools. 
It may also be useful when constructing large data + structures that initially have a fixed number of fixed-sized nodes, + but the number is not known at compile time, and some of the nodes + may later need to be freed. For example: + + struct Node { int item; struct Node* next; }; + + struct Node* build_list() { + struct Node** pool; + int n = read_number_of_nodes_needed(); + if (n <= 0) return 0; + pool = (struct Node**)independent_calloc(n, sizeof(struct Node), 0); + if (pool == 0) die(); + // organize into a linked list... + struct Node* first = pool[0]; + for (int i = 0; i < n-1; ++i) + pool[i]->next = pool[i+1]; + free(pool); // Can now free the array (or not, if it is needed later) + return first; + } +*/ +DLMALLOC_EXPORT void** dlindependent_calloc(size_t, size_t, void**); + +/* + independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]); + + independent_comalloc allocates, all at once, a set of n_elements + chunks with sizes indicated in the "sizes" array. It returns + an array of pointers to these elements, each of which can be + independently freed, realloc'ed etc. The elements are guaranteed to + be adjacently allocated (this is not guaranteed to occur with + multiple callocs or mallocs), which may also improve cache locality + in some applications. + + The "chunks" argument is optional (i.e., may be null). If it is null + the returned array is itself dynamically allocated and should also + be freed when it is no longer needed. Otherwise, the chunks array + must be of at least n_elements in length. It is filled in with the + pointers to the chunks. + + In either case, independent_comalloc returns this pointer array, or + null if the allocation failed. If n_elements is zero and chunks is + null, it returns a chunk representing an array with zero elements + (which should be freed if not wanted). + + Each element must be freed when it is no longer needed. This can be + done all at once using bulk_free.
+ + independent_comalloc differs from independent_calloc in that each + element may have a different size, and also that it does not + automatically clear elements. + + independent_comalloc can be used to speed up allocation in cases + where several structs or objects must always be allocated at the + same time. For example: + + struct Head { ... }; + struct Foot { ... }; + + void send_message(char* msg) { + int msglen = strlen(msg); + size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) }; + void* chunks[3]; + if (independent_comalloc(3, sizes, chunks) == 0) + die(); + struct Head* head = (struct Head*)(chunks[0]); + char* body = (char*)(chunks[1]); + struct Foot* foot = (struct Foot*)(chunks[2]); + // ... + } + + In general though, independent_comalloc is worth using only for + larger values of n_elements. For small values, you probably won't + detect enough difference from a series of malloc calls to bother. + + Overuse of independent_comalloc can increase overall memory usage, + since it cannot reuse existing noncontiguous small chunks that + might be available for some of the elements. +*/ +DLMALLOC_EXPORT void** dlindependent_comalloc(size_t, size_t*, void**); + +/* + bulk_free(void* array[], size_t n_elements) + Frees and clears (sets to null) each non-null pointer in the given + array. This is likely to be faster than freeing them one-by-one. + If footers are used, pointers that have been allocated in different + mspaces are not freed or cleared, and the count of all such pointers + is returned. For large arrays of pointers with poor locality, it + may be worthwhile to sort this array before calling bulk_free. +*/ +DLMALLOC_EXPORT size_t dlbulk_free(void**, size_t n_elements); + +/* + pvalloc(size_t n); + Equivalent to valloc(minimum-page-that-holds(n)), that is, + round up n to nearest pagesize.
+ */ +DLMALLOC_EXPORT void* dlpvalloc(size_t); + +/* + malloc_trim(size_t pad); + + If possible, gives memory back to the system (via negative arguments + to sbrk) if there is unused memory at the `high' end of the malloc + pool or in unused MMAP segments. You can call this after freeing + large blocks of memory to potentially reduce the system-level memory + requirements of a program. However, it cannot guarantee to reduce + memory. Under some allocation patterns, some large free blocks of + memory will be locked between two used chunks, so they cannot be + given back to the system. + + The `pad' argument to malloc_trim represents the amount of free + trailing space to leave untrimmed. If this argument is zero, only + the minimum amount of memory to maintain internal data structures + will be left. Non-zero arguments can be supplied to maintain enough + trailing space to service future expected allocations without having + to re-obtain memory from the system. + + Malloc_trim returns 1 if it actually released any memory, else 0. +*/ +DLMALLOC_EXPORT int dlmalloc_trim(size_t); + +/* + malloc_stats(); + Prints on stderr the amount of space obtained from the system (both + via sbrk and mmap), the maximum amount (which may be more than + current if malloc_trim and/or munmap got called), and the current + number of bytes allocated via malloc (or realloc, etc) but not yet + freed. Note that this is the number of bytes allocated, not the + number requested. It will be larger than the number requested + because of alignment and bookkeeping overhead. Because it includes + alignment wastage as being in use, this figure may be greater than + zero even when no user-level chunks are allocated. + + The reported current and maximum system memory can be inaccurate if + a program makes other calls to system memory allocation functions + (normally sbrk) outside of malloc. + + malloc_stats prints only the most commonly interesting statistics. 
+ More information can be obtained by calling mallinfo. +*/ +DLMALLOC_EXPORT void dlmalloc_stats(void); + +/* + malloc_usable_size(void* p); + + Returns the number of bytes you can actually use in + an allocated chunk, which may be more than you requested (although + often not) due to alignment and minimum size constraints. + You can use this many bytes without worrying about + overwriting other allocated objects. This is not a particularly great + programming practice. malloc_usable_size can be more useful in + debugging and assertions, for example: + + p = malloc(n); + assert(malloc_usable_size(p) >= 256); +*/ +size_t dlmalloc_usable_size(void*); + +#endif /* ONLY_MSPACES */ + +#if MSPACES + +/* + mspace is an opaque type representing an independent + region of space that supports mspace_malloc, etc. +*/ +typedef void* mspace; + +/* + create_mspace creates and returns a new independent space with the + given initial capacity, or, if 0, the default granularity size. It + returns null if there is no system memory available to create the + space. If argument locked is non-zero, the space uses a separate + lock to control access. The capacity of the space will grow + dynamically as needed to service mspace_malloc requests. You can + control the sizes of incremental increases of this space by + compiling with a different DEFAULT_GRANULARITY or dynamically + setting with mallopt(M_GRANULARITY, value). +*/ +DLMALLOC_EXPORT mspace create_mspace(size_t capacity, int locked); + +/* + destroy_mspace destroys the given space, and attempts to return all + of its memory back to the system, returning the total number of + bytes freed. After destruction, the results of access to all memory + used by the space become undefined. +*/ +DLMALLOC_EXPORT size_t destroy_mspace(mspace msp); + +/* + create_mspace_with_base uses the memory supplied as the initial base + of a new mspace. 
Part (less than 128*sizeof(size_t) bytes) of this + space is used for bookkeeping, so the capacity must be at least this + large. (Otherwise 0 is returned.) When this initial space is + exhausted, additional memory will be obtained from the system. + Destroying this space will deallocate all additionally allocated + space (if possible) but not the initial base. +*/ +DLMALLOC_EXPORT mspace create_mspace_with_base(void* base, size_t capacity, int locked); + +/* + mspace_track_large_chunks controls whether requests for large chunks + are allocated in their own untracked mmapped regions, separate from + others in this mspace. By default large chunks are not tracked, + which reduces fragmentation. However, such chunks are not + necessarily released to the system upon destroy_mspace. Enabling + tracking by setting to true may increase fragmentation, but avoids + leakage when relying on destroy_mspace to release all memory + allocated using this space. The function returns the previous + setting. +*/ +DLMALLOC_EXPORT int mspace_track_large_chunks(mspace msp, int enable); + + +/* + mspace_malloc behaves as malloc, but operates within + the given space. +*/ +DLMALLOC_EXPORT void* mspace_malloc(mspace msp, size_t bytes); + +/* + mspace_free behaves as free, but operates within + the given space. + + If compiled with FOOTERS==1, mspace_free is not actually needed. + free may be called instead of mspace_free because freed chunks from + any space are handled by their originating spaces. +*/ +DLMALLOC_EXPORT void mspace_free(mspace msp, void* mem); + +/* + mspace_realloc behaves as realloc, but operates within + the given space. + + If compiled with FOOTERS==1, mspace_realloc is not actually + needed. realloc may be called instead of mspace_realloc because + realloced chunks from any space are handled by their originating + spaces. 
+*/ +DLMALLOC_EXPORT void* mspace_realloc(mspace msp, void* mem, size_t newsize); + +/* + mspace_calloc behaves as calloc, but operates within + the given space. +*/ +DLMALLOC_EXPORT void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size); + +/* + mspace_memalign behaves as memalign, but operates within + the given space. +*/ +DLMALLOC_EXPORT void* mspace_memalign(mspace msp, size_t alignment, size_t bytes); + +/* + mspace_independent_calloc behaves as independent_calloc, but + operates within the given space. +*/ +DLMALLOC_EXPORT void** mspace_independent_calloc(mspace msp, size_t n_elements, + size_t elem_size, void* chunks[]); + +/* + mspace_independent_comalloc behaves as independent_comalloc, but + operates within the given space. +*/ +DLMALLOC_EXPORT void** mspace_independent_comalloc(mspace msp, size_t n_elements, + size_t sizes[], void* chunks[]); + +/* + mspace_footprint() returns the number of bytes obtained from the + system for this space. +*/ +DLMALLOC_EXPORT size_t mspace_footprint(mspace msp); + +/* + mspace_max_footprint() returns the peak number of bytes obtained from the + system for this space. +*/ +DLMALLOC_EXPORT size_t mspace_max_footprint(mspace msp); + + +#if !NO_MALLINFO +/* + mspace_mallinfo behaves as mallinfo, but reports properties of + the given space. +*/ +DLMALLOC_EXPORT struct mallinfo mspace_mallinfo(mspace msp); +#endif /* NO_MALLINFO */ + +/* + malloc_usable_size(void* p) behaves the same as malloc_usable_size; +*/ +DLMALLOC_EXPORT size_t mspace_usable_size(const void* mem); + +/* + mspace_malloc_stats behaves as malloc_stats, but reports + properties of the given space. +*/ +DLMALLOC_EXPORT void mspace_malloc_stats(mspace msp); + +/* + mspace_trim behaves as malloc_trim, but + operates within the given space. +*/ +DLMALLOC_EXPORT int mspace_trim(mspace msp, size_t pad); + +/* + An alias for mallopt. 
+*/
+DLMALLOC_EXPORT int mspace_mallopt(int, int);
+
+#endif /* MSPACES */
+
+#ifdef __cplusplus
+} /* end of extern "C" */
+#endif /* __cplusplus */
+
+/*
+ ========================================================================
+ To make a fully customizable malloc.h header file, cut everything
+ above this line, put into file malloc.h, edit to suit, and #include it
+ on the next line, as well as in programs that use this malloc.
+ ========================================================================
+*/
+
+/* #include "malloc.h" */
+
+/*------------------------------ internal #includes ---------------------- */
+
+#ifdef _MSC_VER
+#pragma warning( disable : 4146 ) /* no "unsigned" warnings */
+#endif /* _MSC_VER */
+#if !NO_MALLOC_STATS
+#include <stdio.h> /* for printing in malloc_stats */
+#endif /* NO_MALLOC_STATS */
+#ifndef LACKS_ERRNO_H
+#include <errno.h> /* for MALLOC_FAILURE_ACTION */
+#endif /* LACKS_ERRNO_H */
+#ifdef DEBUG
+#if ABORT_ON_ASSERT_FAILURE
+#undef assert
+#define assert(x) if(!(x)) ABORT
+#else /* ABORT_ON_ASSERT_FAILURE */
+#include <assert.h>
+#endif /* ABORT_ON_ASSERT_FAILURE */
+#else /* DEBUG */
+#ifndef assert
+#define assert(x)
+#endif
+#define DEBUG 0
+#endif /* DEBUG */
+#if !defined(WIN32) && !defined(LACKS_TIME_H)
+#include <time.h> /* for magic initialization */
+#endif /* WIN32 */
+#ifndef LACKS_STDLIB_H
+#include <stdlib.h> /* for abort() */
+#endif /* LACKS_STDLIB_H */
+#ifndef LACKS_STRING_H
+#include <string.h> /* for memset etc */
+#endif /* LACKS_STRING_H */
+#if USE_BUILTIN_FFS
+#ifndef LACKS_STRINGS_H
+#include <strings.h> /* for ffs */
+#endif /* LACKS_STRINGS_H */
+#endif /* USE_BUILTIN_FFS */
+#if HAVE_MMAP
+#ifndef LACKS_SYS_MMAN_H
+/* On some versions of linux, mremap decl in mman.h needs __USE_GNU set */
+#if (defined(linux) && !defined(__USE_GNU))
+#define __USE_GNU 1
+#include <sys/mman.h> /* for mmap */
+#undef __USE_GNU
+#else
+#include <sys/mman.h> /* for mmap */
+#endif /* linux */
+#endif /* LACKS_SYS_MMAN_H */
+#ifndef LACKS_FCNTL_H
+#include <fcntl.h>
+#endif /* LACKS_FCNTL_H */
+#endif /* HAVE_MMAP */
+#ifndef LACKS_UNISTD_H
+#include <unistd.h> /* for sbrk, sysconf */
+#else /* LACKS_UNISTD_H */
+#if !defined(__FreeBSD__) && !defined(__OpenBSD__) && !defined(__NetBSD__)
+extern void* sbrk(ptrdiff_t);
+#endif /* FreeBSD etc */
+#endif /* LACKS_UNISTD_H */
+
+/* Declarations for locking */
+#if USE_LOCKS
+#ifndef WIN32
+#if defined (__SVR4) && defined (__sun) /* solaris */
+#include <thread.h>
+#elif !defined(LACKS_SCHED_H)
+#include <sched.h>
+#endif /* solaris or LACKS_SCHED_H */
+#if (defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0) || !USE_SPIN_LOCKS
+#include <pthread.h>
+#endif /* USE_RECURSIVE_LOCKS ... */
+#elif defined(_MSC_VER)
+#ifndef _M_AMD64
+/* These are already defined on AMD64 builds */
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+LONG __cdecl _InterlockedCompareExchange(LONG volatile *Dest, LONG Exchange, LONG Comp);
+LONG __cdecl _InterlockedExchange(LONG volatile *Target, LONG Value);
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+#endif /* _M_AMD64 */
+#pragma intrinsic (_InterlockedCompareExchange)
+#pragma intrinsic (_InterlockedExchange)
+#define interlockedcompareexchange _InterlockedCompareExchange
+#define interlockedexchange _InterlockedExchange
+#elif defined(WIN32) && defined(__GNUC__)
+#define interlockedcompareexchange(a, b, c) __sync_val_compare_and_swap(a, c, b)
+#define interlockedexchange __sync_lock_test_and_set
+#endif /* Win32 */
+#else /* USE_LOCKS */
+#endif /* USE_LOCKS */
+
+#ifndef LOCK_AT_FORK
+#define LOCK_AT_FORK 0
+#endif
+
+/* Declarations for bit scanning on win32 */
+#if defined(_MSC_VER) && _MSC_VER>=1300
+#ifndef BitScanForward /* Try to avoid pulling in WinNT.h */
+#ifdef __cplusplus
+extern "C" {
+#endif /* __cplusplus */
+unsigned char _BitScanForward(unsigned long *index, unsigned long mask);
+unsigned char _BitScanReverse(unsigned long *index, unsigned long mask);
+#ifdef __cplusplus
+}
+#endif /* __cplusplus */
+
+#define BitScanForward _BitScanForward
+#define BitScanReverse _BitScanReverse
+#pragma intrinsic(_BitScanForward)
+#pragma intrinsic(_BitScanReverse)
+#endif /* BitScanForward */
+#endif /* defined(_MSC_VER) && _MSC_VER>=1300 */
+
+#ifndef WIN32
+#ifndef malloc_getpagesize
+# ifdef _SC_PAGESIZE /* some SVR4 systems omit an underscore */
+#  ifndef _SC_PAGE_SIZE
+#   define _SC_PAGE_SIZE _SC_PAGESIZE
+#  endif
+# endif
+# ifdef _SC_PAGE_SIZE
+#  define malloc_getpagesize sysconf(_SC_PAGE_SIZE)
+# else
+#  if defined(BSD) || defined(DGUX) || defined(HAVE_GETPAGESIZE)
+    extern size_t getpagesize();
+#   define malloc_getpagesize getpagesize()
+#  else
+#   ifdef WIN32 /* use supplied emulation of getpagesize */
+#    define malloc_getpagesize getpagesize()
+#   else
+#    ifndef LACKS_SYS_PARAM_H
+#     include <sys/param.h>
+#    endif
+#    ifdef EXEC_PAGESIZE
+#     define malloc_getpagesize EXEC_PAGESIZE
+#    else
+#     ifdef NBPG
+#      ifndef CLSIZE
+#       define malloc_getpagesize NBPG
+#      else
+#       define malloc_getpagesize (NBPG * CLSIZE)
+#      endif
+#     else
+#      ifdef NBPC
+#       define malloc_getpagesize NBPC
+#      else
+#       ifdef PAGESIZE
+#        define malloc_getpagesize PAGESIZE
+#       else /* just guess */
+#        define malloc_getpagesize ((size_t)4096U)
+#       endif
+#      endif
+#     endif
+#    endif
+#   endif
+#  endif
+# endif
+#endif
+#endif
+
+/* ------------------- size_t and alignment properties -------------------- */
+
+/* The byte and bit size of a size_t */
+#define SIZE_T_SIZE (sizeof(size_t))
+#define SIZE_T_BITSIZE (sizeof(size_t) << 3)
+
+/* Some constants coerced to size_t */
+/* Annoying but necessary to avoid errors on some platforms */
+#define SIZE_T_ZERO ((size_t)0)
+#define SIZE_T_ONE ((size_t)1)
+#define SIZE_T_TWO ((size_t)2)
+#define SIZE_T_FOUR ((size_t)4)
+#define TWO_SIZE_T_SIZES (SIZE_T_SIZE<<1)
+#define FOUR_SIZE_T_SIZES (SIZE_T_SIZE<<2)
+#define SIX_SIZE_T_SIZES (FOUR_SIZE_T_SIZES+TWO_SIZE_T_SIZES)
+#define HALF_MAX_SIZE_T (MAX_SIZE_T / 2U)
+
+/* The bit mask value corresponding to MALLOC_ALIGNMENT */
+#define CHUNK_ALIGN_MASK (MALLOC_ALIGNMENT - SIZE_T_ONE)
+
+/* True if address a has acceptable alignment */
+#define is_aligned(A) (((size_t)((A)) & (CHUNK_ALIGN_MASK)) == 0) + +/* the number of bytes to offset an address to align it */ +#define align_offset(A)\ + ((((size_t)(A) & CHUNK_ALIGN_MASK) == 0)? 0 :\ + ((MALLOC_ALIGNMENT - ((size_t)(A) & CHUNK_ALIGN_MASK)) & CHUNK_ALIGN_MASK)) + +/* -------------------------- MMAP preliminaries ------------------------- */ + +/* + If HAVE_MORECORE or HAVE_MMAP are false, we just define calls and + checks to fail so compiler optimizer can delete code rather than + using so many "#if"s. +*/ + + +/* MORECORE and MMAP must return MFAIL on failure */ +#define MFAIL ((void*)(MAX_SIZE_T)) +#define CMFAIL ((char*)(MFAIL)) /* defined for convenience */ + +#if HAVE_MMAP + +#ifndef WIN32 +#define MUNMAP_DEFAULT(a, s) munmap((a), (s)) +#define MMAP_PROT (PROT_READ|PROT_WRITE) +#if !defined(MAP_ANONYMOUS) && defined(MAP_ANON) +#define MAP_ANONYMOUS MAP_ANON +#endif /* MAP_ANON */ +#ifdef MAP_ANONYMOUS +#define MMAP_FLAGS (MAP_PRIVATE|MAP_ANONYMOUS) +#define MMAP_DEFAULT(s) mmap(0, (s), MMAP_PROT, MMAP_FLAGS, -1, 0) +#else /* MAP_ANONYMOUS */ +/* + Nearly all versions of mmap support MAP_ANONYMOUS, so the following + is unlikely to be needed, but is supplied just in case. +*/ +#define MMAP_FLAGS (MAP_PRIVATE) +static int dev_zero_fd = -1; /* Cached file descriptor for /dev/zero. */ +#define MMAP_DEFAULT(s) ((dev_zero_fd < 0) ? \ + (dev_zero_fd = open("/dev/zero", O_RDWR), \ + mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) : \ + mmap(0, (s), MMAP_PROT, MMAP_FLAGS, dev_zero_fd, 0)) +#endif /* MAP_ANONYMOUS */ + +#define DIRECT_MMAP_DEFAULT(s) MMAP_DEFAULT(s) + +#else /* WIN32 */ + +/* Win32 MMAP via VirtualAlloc */ +static FORCEINLINE void* win32mmap(size_t size) { + void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT, PAGE_READWRITE); + return (ptr != 0)? 
ptr: MFAIL;
+}
+
+/* For direct MMAP, use MEM_TOP_DOWN to minimize interference */
+static FORCEINLINE void* win32direct_mmap(size_t size) {
+  void* ptr = VirtualAlloc(0, size, MEM_RESERVE|MEM_COMMIT|MEM_TOP_DOWN,
+                           PAGE_READWRITE);
+  return (ptr != 0)? ptr: MFAIL;
+}
+
+/* This function supports releasing coalesced segments */
+static FORCEINLINE int win32munmap(void* ptr, size_t size) {
+  MEMORY_BASIC_INFORMATION minfo;
+  char* cptr = (char*)ptr;
+  while (size) {
+    if (VirtualQuery(cptr, &minfo, sizeof(minfo)) == 0)
+      return -1;
+    if (minfo.BaseAddress != cptr || minfo.AllocationBase != cptr ||
+        minfo.State != MEM_COMMIT || minfo.RegionSize > size)
+      return -1;
+    if (VirtualFree(cptr, 0, MEM_RELEASE) == 0)
+      return -1;
+    cptr += minfo.RegionSize;
+    size -= minfo.RegionSize;
+  }
+  return 0;
+}
+
+#define MMAP_DEFAULT(s) win32mmap(s)
+#define MUNMAP_DEFAULT(a, s) win32munmap((a), (s))
+#define DIRECT_MMAP_DEFAULT(s) win32direct_mmap(s)
+#endif /* WIN32 */
+#endif /* HAVE_MMAP */
+
+#if HAVE_MREMAP
+#ifndef WIN32
+#define MREMAP_DEFAULT(addr, osz, nsz, mv) mremap((addr), (osz), (nsz), (mv))
+#endif /* WIN32 */
+#endif /* HAVE_MREMAP */
+
+/**
+ * Define CALL_MORECORE
+ */
+#if HAVE_MORECORE
+  #ifdef MORECORE
+    #define CALL_MORECORE(S) MORECORE(S)
+  #else /* MORECORE */
+    #define CALL_MORECORE(S) MORECORE_DEFAULT(S)
+  #endif /* MORECORE */
+#else /* HAVE_MORECORE */
+  #define CALL_MORECORE(S) MFAIL
+#endif /* HAVE_MORECORE */
+
+/**
+ * Define CALL_MMAP/CALL_MUNMAP/CALL_DIRECT_MMAP
+ */
+#if HAVE_MMAP
+  #define USE_MMAP_BIT (SIZE_T_ONE)
+
+  #ifdef MMAP
+    #define CALL_MMAP(s) MMAP(s)
+  #else /* MMAP */
+    #define CALL_MMAP(s) MMAP_DEFAULT(s)
+  #endif /* MMAP */
+  #ifdef MUNMAP
+    #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
+  #else /* MUNMAP */
+    #define CALL_MUNMAP(a, s) MUNMAP_DEFAULT((a), (s))
+  #endif /* MUNMAP */
+  #ifdef DIRECT_MMAP
+    #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
+  #else /* DIRECT_MMAP */
+    #define CALL_DIRECT_MMAP(s) DIRECT_MMAP_DEFAULT(s)
+  #endif /* DIRECT_MMAP */
+#else /* HAVE_MMAP */
+  #define USE_MMAP_BIT (SIZE_T_ZERO)
+
+  #define MMAP(s) MFAIL
+  #define MUNMAP(a, s) (-1)
+  #define DIRECT_MMAP(s) MFAIL
+  #define CALL_DIRECT_MMAP(s) DIRECT_MMAP(s)
+  #define CALL_MMAP(s) MMAP(s)
+  #define CALL_MUNMAP(a, s) MUNMAP((a), (s))
+#endif /* HAVE_MMAP */
+
+/**
+ * Define CALL_MREMAP
+ */
+#if HAVE_MMAP && HAVE_MREMAP
+  #ifdef MREMAP
+    #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP((addr), (osz), (nsz), (mv))
+  #else /* MREMAP */
+    #define CALL_MREMAP(addr, osz, nsz, mv) MREMAP_DEFAULT((addr), (osz), (nsz), (mv))
+  #endif /* MREMAP */
+#else /* HAVE_MMAP && HAVE_MREMAP */
+  #define CALL_MREMAP(addr, osz, nsz, mv) MFAIL
+#endif /* HAVE_MMAP && HAVE_MREMAP */
+
+/* mstate bit set if contiguous morecore disabled or failed */
+#define USE_NONCONTIGUOUS_BIT (4U)
+
+/* segment bit set in create_mspace_with_base */
+#define EXTERN_BIT (8U)
+
+
+/* --------------------------- Lock preliminaries ------------------------ */
+
+/*
+ When locks are defined, there is one global lock, plus
+ one per-mspace lock.
+
+ The global lock ensures that mparams.magic and other unique
+ mparams values are initialized only once. It also protects
+ sequences of calls to MORECORE. In many cases sys_alloc requires
+ two calls, that should not be interleaved with calls by other
+ threads. This does not protect against direct calls to MORECORE
+ by other threads not using this lock, so there is still code to
+ cope the best we can on interference.
+
+ Per-mspace locks surround calls to malloc, free, etc.
+ By default, locks are simple non-reentrant mutexes.
+
+ Because lock-protected regions generally have bounded times, it is
+ OK to use the supplied simple spinlocks. Spinlocks are likely to
+ improve performance for lightly contended applications, but worsen
+ performance under heavy contention.
+ + If USE_LOCKS is > 1, the definitions of lock routines here are + bypassed, in which case you will need to define the type MLOCK_T, + and at least INITIAL_LOCK, DESTROY_LOCK, ACQUIRE_LOCK, RELEASE_LOCK + and TRY_LOCK. You must also declare a + static MLOCK_T malloc_global_mutex = { initialization values };. + +*/ + +#if !USE_LOCKS +#define USE_LOCK_BIT (0U) +#define INITIAL_LOCK(l) (0) +#define DESTROY_LOCK(l) (0) +#define ACQUIRE_MALLOC_GLOBAL_LOCK() +#define RELEASE_MALLOC_GLOBAL_LOCK() + +#else +#if USE_LOCKS > 1 +/* ----------------------- User-defined locks ------------------------ */ +/* Define your own lock implementation here */ +/* #define INITIAL_LOCK(lk) ... */ +/* #define DESTROY_LOCK(lk) ... */ +/* #define ACQUIRE_LOCK(lk) ... */ +/* #define RELEASE_LOCK(lk) ... */ +/* #define TRY_LOCK(lk) ... */ +/* static MLOCK_T malloc_global_mutex = ... */ + +#elif USE_SPIN_LOCKS + +/* First, define CAS_LOCK and CLEAR_LOCK on ints */ +/* Note CAS_LOCK defined to return 0 on success */ + +#if defined(__GNUC__)&& (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 1)) +#define CAS_LOCK(sl) __sync_lock_test_and_set(sl, 1) +#define CLEAR_LOCK(sl) __sync_lock_release(sl) + +#elif (defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__))) +/* Custom spin locks for older gcc on x86 */ +static FORCEINLINE int x86_cas_lock(int *sl) { + int ret; + int val = 1; + int cmp = 0; + __asm__ __volatile__ ("lock; cmpxchgl %1, %2" + : "=a" (ret) + : "r" (val), "m" (*(sl)), "0"(cmp) + : "memory", "cc"); + return ret; +} + +static FORCEINLINE void x86_clear_lock(int* sl) { + assert(*sl != 0); + int prev = 0; + int ret; + __asm__ __volatile__ ("lock; xchgl %0, %1" + : "=r" (ret) + : "m" (*(sl)), "0"(prev) + : "memory"); +} + +#define CAS_LOCK(sl) x86_cas_lock(sl) +#define CLEAR_LOCK(sl) x86_clear_lock(sl) + +#else /* Win32 MSC */ +#define CAS_LOCK(sl) interlockedexchange(sl, (LONG)1) +#define CLEAR_LOCK(sl) interlockedexchange (sl, (LONG)0) + +#endif /* ... 
gcc spins locks ... */ + +/* How to yield for a spin lock */ +#define SPINS_PER_YIELD 63 +#if defined(_MSC_VER) +#define SLEEP_EX_DURATION 50 /* delay for yield/sleep */ +#define SPIN_LOCK_YIELD SleepEx(SLEEP_EX_DURATION, FALSE) +#elif defined (__SVR4) && defined (__sun) /* solaris */ +#define SPIN_LOCK_YIELD thr_yield(); +#elif !defined(LACKS_SCHED_H) +#define SPIN_LOCK_YIELD sched_yield(); +#else +#define SPIN_LOCK_YIELD +#endif /* ... yield ... */ + +#if !defined(USE_RECURSIVE_LOCKS) || USE_RECURSIVE_LOCKS == 0 +/* Plain spin locks use single word (embedded in malloc_states) */ +static int spin_acquire_lock(int *sl) { + int spins = 0; + while (*(volatile int *)sl != 0 || CAS_LOCK(sl)) { + if ((++spins & SPINS_PER_YIELD) == 0) { + SPIN_LOCK_YIELD; + } + } + return 0; +} + +#define MLOCK_T int +#define TRY_LOCK(sl) !CAS_LOCK(sl) +#define RELEASE_LOCK(sl) CLEAR_LOCK(sl) +#define ACQUIRE_LOCK(sl) (CAS_LOCK(sl)? spin_acquire_lock(sl) : 0) +#define INITIAL_LOCK(sl) (*sl = 0) +#define DESTROY_LOCK(sl) (0) +static MLOCK_T malloc_global_mutex = 0; + +#else /* USE_RECURSIVE_LOCKS */ +/* types for lock owners */ +#ifdef WIN32 +#define THREAD_ID_T DWORD +#define CURRENT_THREAD GetCurrentThreadId() +#define EQ_OWNER(X,Y) ((X) == (Y)) +#else +/* + Note: the following assume that pthread_t is a type that can be + initialized to (casted) zero. If this is not the case, you will need to + somehow redefine these or not use spin locks. 
+*/ +#define THREAD_ID_T pthread_t +#define CURRENT_THREAD pthread_self() +#define EQ_OWNER(X,Y) pthread_equal(X, Y) +#endif + +struct malloc_recursive_lock { + int sl; + unsigned int c; + THREAD_ID_T threadid; +}; + +#define MLOCK_T struct malloc_recursive_lock +static MLOCK_T malloc_global_mutex = { 0, 0, (THREAD_ID_T)0}; + +static FORCEINLINE void recursive_release_lock(MLOCK_T *lk) { + assert(lk->sl != 0); + if (--lk->c == 0) { + CLEAR_LOCK(&lk->sl); + } +} + +static FORCEINLINE int recursive_acquire_lock(MLOCK_T *lk) { + THREAD_ID_T mythreadid = CURRENT_THREAD; + int spins = 0; + for (;;) { + if (*((volatile int *)(&lk->sl)) == 0) { + if (!CAS_LOCK(&lk->sl)) { + lk->threadid = mythreadid; + lk->c = 1; + return 0; + } + } + else if (EQ_OWNER(lk->threadid, mythreadid)) { + ++lk->c; + return 0; + } + if ((++spins & SPINS_PER_YIELD) == 0) { + SPIN_LOCK_YIELD; + } + } +} + +static FORCEINLINE int recursive_try_lock(MLOCK_T *lk) { + THREAD_ID_T mythreadid = CURRENT_THREAD; + if (*((volatile int *)(&lk->sl)) == 0) { + if (!CAS_LOCK(&lk->sl)) { + lk->threadid = mythreadid; + lk->c = 1; + return 1; + } + } + else if (EQ_OWNER(lk->threadid, mythreadid)) { + ++lk->c; + return 1; + } + return 0; +} + +#define RELEASE_LOCK(lk) recursive_release_lock(lk) +#define TRY_LOCK(lk) recursive_try_lock(lk) +#define ACQUIRE_LOCK(lk) recursive_acquire_lock(lk) +#define INITIAL_LOCK(lk) ((lk)->threadid = (THREAD_ID_T)0, (lk)->sl = 0, (lk)->c = 0) +#define DESTROY_LOCK(lk) (0) +#endif /* USE_RECURSIVE_LOCKS */ + +#elif defined(WIN32) /* Win32 critical sections */ +#define MLOCK_T CRITICAL_SECTION +#define ACQUIRE_LOCK(lk) (EnterCriticalSection(lk), 0) +#define RELEASE_LOCK(lk) LeaveCriticalSection(lk) +#define TRY_LOCK(lk) TryEnterCriticalSection(lk) +#define INITIAL_LOCK(lk) (!InitializeCriticalSectionAndSpinCount((lk), 0x80000000|4000)) +#define DESTROY_LOCK(lk) (DeleteCriticalSection(lk), 0) +#define NEED_GLOBAL_LOCK_INIT + +static MLOCK_T malloc_global_mutex; +static volatile LONG 
malloc_global_mutex_status; + +/* Use spin loop to initialize global lock */ +static void init_malloc_global_mutex() { + for (;;) { + long stat = malloc_global_mutex_status; + if (stat > 0) + return; + /* transition to < 0 while initializing, then to > 0) */ + if (stat == 0 && + interlockedcompareexchange(&malloc_global_mutex_status, (LONG)-1, (LONG)0) == 0) { + InitializeCriticalSection(&malloc_global_mutex); + interlockedexchange(&malloc_global_mutex_status, (LONG)1); + return; + } + SleepEx(0, FALSE); + } +} + +#else /* pthreads-based locks */ +#define MLOCK_T pthread_mutex_t +#define ACQUIRE_LOCK(lk) pthread_mutex_lock(lk) +#define RELEASE_LOCK(lk) pthread_mutex_unlock(lk) +#define TRY_LOCK(lk) (!pthread_mutex_trylock(lk)) +#define INITIAL_LOCK(lk) pthread_init_lock(lk) +#define DESTROY_LOCK(lk) pthread_mutex_destroy(lk) + +#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 && defined(linux) && !defined(PTHREAD_MUTEX_RECURSIVE) +/* Cope with old-style linux recursive lock initialization by adding */ +/* skipped internal declaration from pthread.h */ +extern int pthread_mutexattr_setkind_np __P ((pthread_mutexattr_t *__attr, + int __kind)); +#define PTHREAD_MUTEX_RECURSIVE PTHREAD_MUTEX_RECURSIVE_NP +#define pthread_mutexattr_settype(x,y) pthread_mutexattr_setkind_np(x,y) +#endif /* USE_RECURSIVE_LOCKS ... */ + +static MLOCK_T malloc_global_mutex = PTHREAD_MUTEX_INITIALIZER; + +static int pthread_init_lock (MLOCK_T *lk) { + pthread_mutexattr_t attr; + if (pthread_mutexattr_init(&attr)) return 1; +#if defined(USE_RECURSIVE_LOCKS) && USE_RECURSIVE_LOCKS != 0 + if (pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE)) return 1; +#endif + if (pthread_mutex_init(lk, &attr)) return 1; + if (pthread_mutexattr_destroy(&attr)) return 1; + return 0; +} + +#endif /* ... lock types ... 
*/ + +/* Common code for all lock types */ +#define USE_LOCK_BIT (2U) + +#ifndef ACQUIRE_MALLOC_GLOBAL_LOCK +#define ACQUIRE_MALLOC_GLOBAL_LOCK() ACQUIRE_LOCK(&malloc_global_mutex); +#endif + +#ifndef RELEASE_MALLOC_GLOBAL_LOCK +#define RELEASE_MALLOC_GLOBAL_LOCK() RELEASE_LOCK(&malloc_global_mutex); +#endif + +#endif /* USE_LOCKS */ + +/* ----------------------- Chunk representations ------------------------ */ + +/* + (The following includes lightly edited explanations by Colin Plumb.) + + The malloc_chunk declaration below is misleading (but accurate and + necessary). It declares a "view" into memory allowing access to + necessary fields at known offsets from a given base. + + Chunks of memory are maintained using a `boundary tag' method as + originally described by Knuth. (See the paper by Paul Wilson + ftp://ftp.cs.utexas.edu/pub/garbage/allocsrv.ps for a survey of such + techniques.) Sizes of free chunks are stored both in the front of + each chunk and at the end. This makes consolidating fragmented + chunks into bigger chunks fast. The head fields also hold bits + representing whether chunks are free or in use. + + Here are some pictures to make it clearer. They are "exploded" to + show that the state of a chunk can be thought of as extending from + the high 31 bits of the head field of its header through the + prev_foot and PINUSE_BIT bit of the following chunk header. 
+ + A chunk that's in use looks like: + + chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Size of previous chunk (if P = 0) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| + | Size of this chunk 1| +-+ + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | | + +- -+ + | | + +- -+ + | : + +- size - sizeof(size_t) available payload bytes -+ + : | + chunk-> +- -+ + | | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |1| + | Size of next chunk (may or may not be in use) | +-+ + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + + And if it's free, it looks like this: + + chunk-> +- -+ + | User payload (must be in use, or we would have merged!) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |P| + | Size of this chunk 0| +-+ + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Next pointer | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Prev pointer | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | : + +- size - sizeof(struct chunk) unused bytes -+ + : | + chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Size of this chunk | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ |0| + | Size of next chunk (must be in use, or we would have merged)| +-+ + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | : + +- User payload -+ + : | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + |0| + +-+ + Note that since we always merge adjacent free chunks, the chunks + adjacent to a free chunk must be in use. 
+ + Given a pointer to a chunk (which can be derived trivially from the + payload pointer) we can, in O(1) time, find out whether the adjacent + chunks are free, and if so, unlink them from the lists that they + are on and merge them with the current chunk. + + Chunks always begin on even word boundaries, so the mem portion + (which is returned to the user) is also on an even word boundary, and + thus at least double-word aligned. + + The P (PINUSE_BIT) bit, stored in the unused low-order bit of the + chunk size (which is always a multiple of two words), is an in-use + bit for the *previous* chunk. If that bit is *clear*, then the + word before the current chunk size contains the previous chunk + size, and can be used to find the front of the previous chunk. + The very first chunk allocated always has this bit set, preventing + access to non-existent (or non-owned) memory. If pinuse is set for + any given chunk, then you CANNOT determine the size of the + previous chunk, and might even get a memory addressing fault when + trying to do so. + + The C (CINUSE_BIT) bit, stored in the unused second-lowest bit of + the chunk size redundantly records whether the current chunk is + inuse (unless the chunk is mmapped). This redundancy enables usage + checks within free and realloc, and reduces indirection when freeing + and consolidating chunks. + + Each freshly allocated chunk must have both cinuse and pinuse set. + That is, each allocated chunk borders either a previously allocated + and still in-use chunk, or the base of its memory arena. This is + ensured by making all allocations from the `lowest' part of any + found chunk. Further, no free chunk physically borders another one, + so each free chunk is known to be preceded and followed by either + inuse chunks or the ends of memory. + + Note that the `foot' of the current chunk is actually represented + as the prev_foot of the NEXT chunk. 
This makes it easier to
+ deal with alignments etc but can be very confusing when trying
+ to extend or adapt this code.
+
+ The exceptions to all this are
+
+ 1. The special chunk `top' is the top-most available chunk (i.e.,
+    the one bordering the end of available memory). It is treated
+    specially. Top is never included in any bin, is used only if
+    no other chunk is available, and is released back to the
+    system if it is very large (see M_TRIM_THRESHOLD). In effect,
+    the top chunk is treated as larger (and thus less well
+    fitting) than any other available chunk. The top chunk
+    doesn't update its trailing size field since there is no next
+    contiguous chunk that would have to index off it. However,
+    space is still allocated for it (TOP_FOOT_SIZE) to enable
+    separation or merging when space is extended.
+
+ 2. Chunks allocated via mmap have both cinuse and pinuse bits
+    cleared in their head fields. Because they are allocated
+    one-by-one, each must carry its own prev_foot field, which is
+    also used to hold the offset this chunk has within its mmapped
+    region, which is needed to preserve alignment. Each mmapped
+    chunk is trailed by the first two fields of a fake next-chunk
+    for sake of usage checks.
+
+*/
+
+struct malloc_chunk {
+  size_t prev_foot; /* Size of previous chunk (if free). */
+  size_t head; /* Size and inuse bits. */
+  struct malloc_chunk* fd; /* double links -- used only if free.
*/ + struct malloc_chunk* bk; +}; + +typedef struct malloc_chunk mchunk; +typedef struct malloc_chunk* mchunkptr; +typedef struct malloc_chunk* sbinptr; /* The type of bins of chunks */ +typedef unsigned int bindex_t; /* Described below */ +typedef unsigned int binmap_t; /* Described below */ +typedef unsigned int flag_t; /* The type of various bit flag sets */ + +/* ------------------- Chunks sizes and alignments ----------------------- */ + +#define MCHUNK_SIZE (sizeof(mchunk)) + +#if FOOTERS +#define CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) +#else /* FOOTERS */ +#define CHUNK_OVERHEAD (SIZE_T_SIZE) +#endif /* FOOTERS */ + +/* MMapped chunks need a second word of overhead ... */ +#define MMAP_CHUNK_OVERHEAD (TWO_SIZE_T_SIZES) +/* ... and additional padding for fake next-chunk at foot */ +#define MMAP_FOOT_PAD (FOUR_SIZE_T_SIZES) + +/* The smallest size we can malloc is an aligned minimal chunk */ +#define MIN_CHUNK_SIZE\ + ((MCHUNK_SIZE + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) + +/* conversion from malloc headers to user pointers, and back */ +#define chunk2mem(p) ((void*)((char*)(p) + TWO_SIZE_T_SIZES)) +#define mem2chunk(mem) ((mchunkptr)((char*)(mem) - TWO_SIZE_T_SIZES)) +/* chunk associated with aligned address A */ +#define align_as_chunk(A) (mchunkptr)((A) + align_offset(chunk2mem(A))) + +/* Bounds on request (not chunk) sizes. */ +#define MAX_REQUEST ((-MIN_CHUNK_SIZE) << 2) +#define MIN_REQUEST (MIN_CHUNK_SIZE - CHUNK_OVERHEAD - SIZE_T_ONE) + +/* pad request bytes into a usable size */ +#define pad_request(req) \ + (((req) + CHUNK_OVERHEAD + CHUNK_ALIGN_MASK) & ~CHUNK_ALIGN_MASK) + +/* pad request, checking for minimum (but not maximum) */ +#define request2size(req) \ + (((req) < MIN_REQUEST)? 
MIN_CHUNK_SIZE : pad_request(req)) + + +/* ------------------ Operations on head and foot fields ----------------- */ + +/* + The head field of a chunk is or'ed with PINUSE_BIT when previous + adjacent chunk in use, and or'ed with CINUSE_BIT if this chunk is in + use, unless mmapped, in which case both bits are cleared. + + FLAG4_BIT is not used by this malloc, but might be useful in extensions. +*/ + +#define PINUSE_BIT (SIZE_T_ONE) +#define CINUSE_BIT (SIZE_T_TWO) +#define FLAG4_BIT (SIZE_T_FOUR) +#define INUSE_BITS (PINUSE_BIT|CINUSE_BIT) +#define FLAG_BITS (PINUSE_BIT|CINUSE_BIT|FLAG4_BIT) + +/* Head value for fenceposts */ +#define FENCEPOST_HEAD (INUSE_BITS|SIZE_T_SIZE) + +/* extraction of fields from head words */ +#define cinuse(p) ((p)->head & CINUSE_BIT) +#define pinuse(p) ((p)->head & PINUSE_BIT) +#define flag4inuse(p) ((p)->head & FLAG4_BIT) +#define is_inuse(p) (((p)->head & INUSE_BITS) != PINUSE_BIT) +#define is_mmapped(p) (((p)->head & INUSE_BITS) == 0) + +#define chunksize(p) ((p)->head & ~(FLAG_BITS)) + +#define clear_pinuse(p) ((p)->head &= ~PINUSE_BIT) +#define set_flag4(p) ((p)->head |= FLAG4_BIT) +#define clear_flag4(p) ((p)->head &= ~FLAG4_BIT) + +/* Treat space at ptr +/- offset as a chunk */ +#define chunk_plus_offset(p, s) ((mchunkptr)(((char*)(p)) + (s))) +#define chunk_minus_offset(p, s) ((mchunkptr)(((char*)(p)) - (s))) + +/* Ptr to next or previous physical malloc_chunk. 
*/ +#define next_chunk(p) ((mchunkptr)( ((char*)(p)) + ((p)->head & ~FLAG_BITS))) +#define prev_chunk(p) ((mchunkptr)( ((char*)(p)) - ((p)->prev_foot) )) + +/* extract next chunk's pinuse bit */ +#define next_pinuse(p) ((next_chunk(p)->head) & PINUSE_BIT) + +/* Get/set size at footer */ +#define get_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot) +#define set_foot(p, s) (((mchunkptr)((char*)(p) + (s)))->prev_foot = (s)) + +/* Set size, pinuse bit, and foot */ +#define set_size_and_pinuse_of_free_chunk(p, s)\ + ((p)->head = (s|PINUSE_BIT), set_foot(p, s)) + +/* Set size, pinuse bit, foot, and clear next pinuse */ +#define set_free_with_pinuse(p, s, n)\ + (clear_pinuse(n), set_size_and_pinuse_of_free_chunk(p, s)) + +/* Get the internal overhead associated with chunk p */ +#define overhead_for(p)\ + (is_mmapped(p)? MMAP_CHUNK_OVERHEAD : CHUNK_OVERHEAD) + +/* Return true if malloced space is not necessarily cleared */ +#if MMAP_CLEARS +#define calloc_must_clear(p) (!is_mmapped(p)) +#else /* MMAP_CLEARS */ +#define calloc_must_clear(p) (1) +#endif /* MMAP_CLEARS */ + +/* ---------------------- Overlaid data structures ----------------------- */ + +/* + When chunks are not in use, they are treated as nodes of either + lists or trees. + + "Small" chunks are stored in circular doubly-linked lists, and look + like this: + + chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Size of previous chunk | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + `head:' | Size of chunk, in bytes |P| + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Forward pointer to next chunk in list | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Back pointer to previous chunk in list | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Unused space (may be 0 bytes long) . + . . + . 
| +nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + `foot:' | Size of chunk, in bytes | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + + Larger chunks are kept in a form of bitwise digital trees (aka + tries) keyed on chunksizes. Because malloc_tree_chunks are only for + free chunks greater than 256 bytes, their size doesn't impose any + constraints on user chunk sizes. Each node looks like: + + chunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Size of previous chunk | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + `head:' | Size of chunk, in bytes |P| + mem-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Forward pointer to next chunk of same size | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Back pointer to previous chunk of same size | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Pointer to left child (child[0]) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Pointer to right child (child[1]) | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Pointer to parent | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | bin index of this chunk | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + | Unused space . + . | +nextchunk-> +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + `foot:' | Size of chunk, in bytes | + +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ + + Each tree holding treenodes is a tree of unique chunk sizes. Chunks + of the same size are arranged in a circularly-linked list, with only + the oldest chunk (the next to be used, in our FIFO ordering) + actually in the tree. (Tree members are distinguished by a non-null + parent pointer.) 
If a chunk with the same size as an existing node + is inserted, it is linked off the existing node using pointers that + work in the same way as fd/bk pointers of small chunks. + + Each tree contains a power of 2 sized range of chunk sizes (the + smallest is 0x100 <= x < 0x180), which is divided in half at each + tree level, with the chunks in the smaller half of the range (0x100 + <= x < 0x140 for the top node) in the left subtree and the larger + half (0x140 <= x < 0x180) in the right subtree. This is, of course, + done by inspecting individual bits. + + Using these rules, each node's left subtree contains all smaller + sizes than its right subtree. However, the node at the root of each + subtree has no particular ordering relationship to either. (The + dividing line between the subtree sizes is based on trie relation.) + If we remove the last chunk of a given size from the interior of the + tree, we need to replace it with a leaf node. The tree ordering + rules permit a node to be replaced by any leaf below it. + + The smallest chunk in a tree (a common operation in a best-fit + allocator) can be found by walking a path to the leftmost leaf in + the tree. Unlike a usual binary tree, where we follow left child + pointers until we reach a null, here we follow the right child + pointer any time the left one is null, until we reach a leaf with + both child pointers null. The smallest chunk in the tree will be + somewhere along that path. + + The worst case number of steps to add, find, or remove a node is + bounded by the number of bits differentiating chunks within + bins. Under current bin calculations, this ranges from 6 up to 21 + (for 32 bit sizes) or up to 53 (for 64 bit sizes). The typical case + is of course much better. 
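The two-bins-per-power-of-two rule above can be modelled as a small standalone function. This is an illustrative C sketch of the index computation (top bit of the shifted size picks the power of two, the bit just below it picks one of the two bins), not the compute_tree_index macros themselves; the EX_-prefixed names are hypothetical stand-ins for TREEBIN_SHIFT and NTREEBINS.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative model of the treebin index rule: K is the position of
   the highest set bit of (size >> EX_TREEBIN_SHIFT); the bin within
   that power of two is chosen by the next bit below it. A sketch,
   assuming the same constants as this file (8 and 32). */
#define EX_TREEBIN_SHIFT 8
#define EX_NTREEBINS 32

static unsigned ex_tree_index(size_t s) {
    size_t x = s >> EX_TREEBIN_SHIFT;
    if (x == 0)
        return 0;                    /* smallest tree range */
    if (x > 0xFFFF)
        return EX_NTREEBINS - 1;     /* last bin holds anything larger */
    unsigned k = 31u - (unsigned)__builtin_clz((unsigned)x);
    return (k << 1) + (unsigned)((s >> (k + (EX_TREEBIN_SHIFT - 1))) & 1);
}
```

So sizes in [0x100, 0x180) land in bin 0 and [0x180, 0x200) in bin 1, matching the halving described above.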
+*/ + +struct malloc_tree_chunk { + /* The first four fields must be compatible with malloc_chunk */ + size_t prev_foot; + size_t head; + struct malloc_tree_chunk* fd; + struct malloc_tree_chunk* bk; + + struct malloc_tree_chunk* child[2]; + struct malloc_tree_chunk* parent; + bindex_t index; +}; + +typedef struct malloc_tree_chunk tchunk; +typedef struct malloc_tree_chunk* tchunkptr; +typedef struct malloc_tree_chunk* tbinptr; /* The type of bins of trees */ + +/* A little helper macro for trees */ +#define leftmost_child(t) ((t)->child[0] != 0? (t)->child[0] : (t)->child[1]) + +/* ----------------------------- Segments -------------------------------- */ + +/* + Each malloc space may include non-contiguous segments, held in a + list headed by an embedded malloc_segment record representing the + top-most space. Segments also include flags holding properties of + the space. Large chunks that are directly allocated by mmap are not + included in this list. They are instead independently created and + destroyed without otherwise keeping track of them. + + Segment management mainly comes into play for spaces allocated by + MMAP. Any call to MMAP might or might not return memory that is + adjacent to an existing segment. MORECORE normally contiguously + extends the current space, so this space is almost always adjacent, + which is simpler and faster to deal with. (This is why MORECORE is + used preferentially to MMAP when both are available -- see + sys_alloc.) When allocating using MMAP, we don't use any of the + hinting mechanisms (inconsistently) supported in various + implementations of unix mmap, or distinguish reserving from + committing memory. Instead, we just ask for space, and exploit + contiguity when we get it. It is probably possible to do + better than this on some systems, but no general scheme seems + to be significantly better. 
+ + Management entails a simpler variant of the consolidation scheme + used for chunks to reduce fragmentation -- new adjacent memory is + normally prepended or appended to an existing segment. However, + there are limitations compared to chunk consolidation that mostly + reflect the fact that segment processing is relatively infrequent + (occurring only when getting memory from system) and that we + don't expect to have huge numbers of segments: + + * Segments are not indexed, so traversal requires linear scans. (It + would be possible to index these, but is not worth the extra + overhead and complexity for most programs on most platforms.) + * New segments are only appended to old ones when holding top-most + memory; if they cannot be prepended to others, they are held in + different segments. + + Except for the top-most segment of an mstate, each segment record + is kept at the tail of its segment. Segments are added by pushing + segment records onto the list headed by &mstate.seg for the + containing mstate. + + Segment flags control allocation/merge/deallocation policies: + * If EXTERN_BIT set, then we did not allocate this segment, + and so should not try to deallocate or merge with others. + (This currently holds only for the initial segment passed + into create_mspace_with_base.) + * If USE_MMAP_BIT set, the segment may be merged with + other surrounding mmapped segments and trimmed/de-allocated + using munmap. + * If neither bit is set, then the segment was obtained using + MORECORE so can be merged with surrounding MORECORE'd segments + and deallocated/trimmed using MORECORE with negative arguments. 
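Because segments are not indexed, finding the segment that holds an address is the linear scan described above. The following is a minimal C sketch of that traversal; the ex_-prefixed struct and function are hypothetical stand-ins for malloc_segment and segment_holding, carrying only the fields the scan needs.

```c
#include <assert.h>
#include <stddef.h>

/* Minimal model of the segment list: records chained through `next`,
   searched linearly because no index is maintained. A sketch, not the
   real malloc_segment (which also carries flags). */
struct ex_segment {
    char *base;                 /* base address of the segment */
    size_t size;                /* bytes covered from base */
    struct ex_segment *next;    /* next segment, or NULL */
};

static struct ex_segment *ex_segment_holding(struct ex_segment *sp,
                                             char *addr) {
    for (; sp != NULL; sp = sp->next)
        if (addr >= sp->base && addr < sp->base + sp->size)
            return sp;          /* segment containing addr */
    return NULL;                /* address not in any segment */
}
```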
+*/ + +struct malloc_segment { + char* base; /* base address */ + size_t size; /* allocated size */ + struct malloc_segment* next; /* ptr to next segment */ + flag_t sflags; /* mmap and extern flag */ +}; + +#define is_mmapped_segment(S) ((S)->sflags & USE_MMAP_BIT) +#define is_extern_segment(S) ((S)->sflags & EXTERN_BIT) + +typedef struct malloc_segment msegment; +typedef struct malloc_segment* msegmentptr; + +/* ---------------------------- malloc_state ----------------------------- */ + +/* + A malloc_state holds all of the bookkeeping for a space. + The main fields are: + + Top + The topmost chunk of the currently active segment. Its size is + cached in topsize. The actual size of topmost space is + topsize+TOP_FOOT_SIZE, which includes space reserved for adding + fenceposts and segment records if necessary when getting more + space from the system. The size at which to autotrim top is + cached from mparams in trim_check, except that it is disabled if + an autotrim fails. + + Designated victim (dv) + This is the preferred chunk for servicing small requests that + don't have exact fits. It is normally the chunk split off most + recently to service another small request. Its size is cached in + dvsize. The link fields of this chunk are not maintained since it + is not kept in a bin. + + SmallBins + An array of bin headers for free chunks. These bins hold chunks + with sizes less than MIN_LARGE_SIZE bytes. Each bin contains + chunks of all the same size, spaced 8 bytes apart. To simplify + use in double-linked lists, each bin header acts as a malloc_chunk + pointing to the real first node, if it exists (else pointing to + itself). This avoids special-casing for headers. But to avoid + waste, we allocate only the fd/bk pointers of bins, and then use + repositioning tricks to treat these as the fields of a chunk. + + TreeBins + Treebins are pointers to the roots of trees holding a range of + sizes. 
There are 2 equally spaced treebins for each power of two + from TREEBIN_SHIFT to TREEBIN_SHIFT+16. The last bin holds anything + larger. + + Bin maps + There is one bit map for small bins ("smallmap") and one for + treebins ("treemap"). Each bin sets its bit when non-empty, and + clears the bit when empty. Bit operations are then used to avoid + bin-by-bin searching -- nearly all "search" is done without ever + looking at bins that won't be selected. The bit maps + conservatively use 32 bits per map word, even on a 64-bit system. + For a good description of some of the bit-based techniques used + here, see Henry S. Warren Jr's book "Hacker's Delight" (and + supplement at http://hackersdelight.org/). Many of these are + intended to reduce the branchiness of paths through malloc etc, as + well as to reduce the number of memory locations read or written. + + Segments + A list of segments headed by an embedded malloc_segment record + representing the initial space. + + Address check support + The least_addr field is the least address ever obtained from + MORECORE or MMAP. Attempted frees and reallocs of any address less + than this are trapped (unless INSECURE is defined). + + Magic tag + A cross-check field that should always hold the same value as mparams.magic. + + Max allowed footprint + The maximum allowed bytes to allocate from system (zero means no limit) + + Flags + Bits recording whether to use MMAP, locks, or contiguous MORECORE + + Statistics + Each space keeps track of current and maximum system memory + obtained via MORECORE or MMAP. + + Trim support + Fields holding the amount of unused topmost memory that should trigger + trimming, and a counter to force periodic scanning to release unused + non-topmost segments. + + Locking + If USE_LOCKS is defined, the "mutex" lock is acquired and released + around every public call using this mspace. + + Extension support + A void* pointer and a size_t field that can be used to help implement + extensions to this malloc. 
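The "Bin maps" technique above can be sketched in isolation: to find the first non-empty bin at or above an index, mask off the lower bits and isolate the least set bit (the same `x & -x` idiom as the least_bit macro defined later in this file) instead of scanning bin by bin. The ex_-prefixed names are hypothetical.

```c
#include <assert.h>

/* Sketch of bitmap-guided bin search. The map has one bit per bin;
   a set bit means the bin is non-empty. 32-bit map word, per the text. */
typedef unsigned int ex_binmap_t;

static int ex_first_bin_at_or_above(ex_binmap_t map, unsigned i) {
    /* clear bits for bins below index i (i < 32 assumed) */
    ex_binmap_t masked = map & ~((((ex_binmap_t)1) << i) - 1);
    if (masked == 0)
        return -1;                                /* no suitable bin */
    ex_binmap_t least = masked & (0u - masked);   /* least set bit */
    int idx = 0;
    while (!(least & 1u)) {                       /* bit position = bin index */
        least >>= 1;
        idx++;
    }
    return idx;
}
```

The real code replaces the counting loop with compute_bit2idx, which uses a hardware bit-scan where available.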
+*/ + +/* Bin types, widths and sizes */ +#define NSMALLBINS (32U) +#define NTREEBINS (32U) +#define SMALLBIN_SHIFT (3U) +#define SMALLBIN_WIDTH (SIZE_T_ONE << SMALLBIN_SHIFT) +#define TREEBIN_SHIFT (8U) +#define MIN_LARGE_SIZE (SIZE_T_ONE << TREEBIN_SHIFT) +#define MAX_SMALL_SIZE (MIN_LARGE_SIZE - SIZE_T_ONE) +#define MAX_SMALL_REQUEST (MAX_SMALL_SIZE - CHUNK_ALIGN_MASK - CHUNK_OVERHEAD) + +struct malloc_state { + binmap_t smallmap; + binmap_t treemap; + size_t dvsize; + size_t topsize; + char* least_addr; + mchunkptr dv; + mchunkptr top; + size_t trim_check; + size_t release_checks; + size_t magic; + mchunkptr smallbins[(NSMALLBINS+1)*2]; + tbinptr treebins[NTREEBINS]; + size_t footprint; + size_t max_footprint; + size_t footprint_limit; /* zero means no limit */ + flag_t mflags; +#if USE_LOCKS + MLOCK_T mutex; /* locate lock among fields that rarely change */ +#endif /* USE_LOCKS */ + msegment seg; + void* extp; /* Unused but available for extensions */ + size_t exts; +}; + +typedef struct malloc_state* mstate; + +/* ------------- Global malloc_state and malloc_params ------------------- */ + +/* + malloc_params holds global properties, including those that can be + dynamically set using mallopt. There is a single instance, mparams, + initialized in init_mparams. Note that the non-zeroness of "magic" + also serves as an initialization flag. 
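The magic-doubles-as-initialization-flag pattern noted above can be sketched as follows. This is a minimal C sketch with hypothetical ex_-prefixed names; the real init_mparams additionally sanity-checks the configuration and derives magic from a random source or the time.

```c
#include <assert.h>
#include <stddef.h>

/* Sketch: a params struct whose nonzero `magic` means "initialized",
   so the ensure macro is a single cheap test in the common case. */
struct ex_params { size_t magic; size_t page_size; };
static struct ex_params ex_mparams;   /* zero-initialized: not yet set up */

static int ex_init_params(void) {
    if (ex_mparams.magic == 0) {
        ex_mparams.page_size = 4096;  /* stand-in for a system query */
        ex_mparams.magic = 0x5555u;   /* any nonzero value marks "done" */
    }
    return 1;
}

/* mirrors ensure_initialization(): init runs at most once */
#define ex_ensure_initialization() \
    ((void)(ex_mparams.magic != 0 || ex_init_params()))
```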
+*/ + +struct malloc_params { + size_t magic; + size_t page_size; + size_t granularity; + size_t mmap_threshold; + size_t trim_threshold; + flag_t default_mflags; +}; + +static struct malloc_params mparams; + +/* Ensure mparams initialized */ +#define ensure_initialization() (void)(mparams.magic != 0 || init_mparams()) + +#if !ONLY_MSPACES + +/* The global malloc_state used for all non-"mspace" calls */ +static struct malloc_state _gm_; +#define gm (&_gm_) +#define is_global(M) ((M) == &_gm_) + +#endif /* !ONLY_MSPACES */ + +#define is_initialized(M) ((M)->top != 0) + +/* -------------------------- system alloc setup ------------------------- */ + +/* Operations on mflags */ + +#define use_lock(M) ((M)->mflags & USE_LOCK_BIT) +#define enable_lock(M) ((M)->mflags |= USE_LOCK_BIT) +#if USE_LOCKS +#define disable_lock(M) ((M)->mflags &= ~USE_LOCK_BIT) +#else +#define disable_lock(M) +#endif + +#define use_mmap(M) ((M)->mflags & USE_MMAP_BIT) +#define enable_mmap(M) ((M)->mflags |= USE_MMAP_BIT) +#if HAVE_MMAP +#define disable_mmap(M) ((M)->mflags &= ~USE_MMAP_BIT) +#else +#define disable_mmap(M) +#endif + +#define use_noncontiguous(M) ((M)->mflags & USE_NONCONTIGUOUS_BIT) +#define disable_contiguous(M) ((M)->mflags |= USE_NONCONTIGUOUS_BIT) + +#define set_lock(M,L)\ + ((M)->mflags = (L)?\ + ((M)->mflags | USE_LOCK_BIT) :\ + ((M)->mflags & ~USE_LOCK_BIT)) + +/* page-align a size */ +#define page_align(S)\ + (((S) + (mparams.page_size - SIZE_T_ONE)) & ~(mparams.page_size - SIZE_T_ONE)) + +/* granularity-align a size */ +#define granularity_align(S)\ + (((S) + (mparams.granularity - SIZE_T_ONE))\ + & ~(mparams.granularity - SIZE_T_ONE)) + + +/* For mmap, use granularity alignment on windows, else page-align */ +#ifdef WIN32 +#define mmap_align(S) granularity_align(S) +#else +#define mmap_align(S) page_align(S) +#endif + +/* For sys_alloc, enough padding to ensure can malloc request on success */ +#define SYS_ALLOC_PADDING (TOP_FOOT_SIZE + MALLOC_ALIGNMENT) + +#define 
is_page_aligned(S)\ + (((size_t)(S) & (mparams.page_size - SIZE_T_ONE)) == 0) +#define is_granularity_aligned(S)\ + (((size_t)(S) & (mparams.granularity - SIZE_T_ONE)) == 0) + +/* True if segment S holds address A */ +#define segment_holds(S, A)\ + ((char*)(A) >= S->base && (char*)(A) < S->base + S->size) + +/* Return segment holding given address */ +static msegmentptr segment_holding(mstate m, char* addr) { + msegmentptr sp = &m->seg; + for (;;) { + if (addr >= sp->base && addr < sp->base + sp->size) + return sp; + if ((sp = sp->next) == 0) + return 0; + } +} + +/* Return true if segment contains a segment link */ +static int has_segment_link(mstate m, msegmentptr ss) { + msegmentptr sp = &m->seg; + for (;;) { + if ((char*)sp >= ss->base && (char*)sp < ss->base + ss->size) + return 1; + if ((sp = sp->next) == 0) + return 0; + } +} + +#ifndef MORECORE_CANNOT_TRIM +#define should_trim(M,s) ((s) > (M)->trim_check) +#else /* MORECORE_CANNOT_TRIM */ +#define should_trim(M,s) (0) +#endif /* MORECORE_CANNOT_TRIM */ + +/* + TOP_FOOT_SIZE is padding at the end of a segment, including space + that may be needed to place segment records and fenceposts when new + noncontiguous segments are added. +*/ +#define TOP_FOOT_SIZE\ + (align_offset(chunk2mem(0))+pad_request(sizeof(struct malloc_segment))+MIN_CHUNK_SIZE) + + +/* ------------------------------- Hooks -------------------------------- */ + +/* + PREACTION should be defined to return 0 on success, and nonzero on + failure. If you are not using locking, you can redefine these to do + anything you like. +*/ + +#if USE_LOCKS +#define PREACTION(M) ((use_lock(M))? ACQUIRE_LOCK(&(M)->mutex) : 0) +#define POSTACTION(M) { if (use_lock(M)) RELEASE_LOCK(&(M)->mutex); } +#else /* USE_LOCKS */ + +#ifndef PREACTION +#define PREACTION(M) (0) +#endif /* PREACTION */ + +#ifndef POSTACTION +#define POSTACTION(M) +#endif /* POSTACTION */ + +#endif /* USE_LOCKS */ + +/* + CORRUPTION_ERROR_ACTION is triggered upon detected bad addresses. 
+ USAGE_ERROR_ACTION is triggered on detected bad frees and + reallocs. The argument p is an address that might have triggered the + fault. It is ignored by the two predefined actions, but might be + useful in custom actions that try to help diagnose errors. +*/ + +#if PROCEED_ON_ERROR + +/* A count of the number of corruption errors causing resets */ +int malloc_corruption_error_count; + +/* default corruption action */ +static void reset_on_error(mstate m); + +#define CORRUPTION_ERROR_ACTION(m) reset_on_error(m) +#define USAGE_ERROR_ACTION(m, p) + +#else /* PROCEED_ON_ERROR */ + +#ifndef CORRUPTION_ERROR_ACTION +#define CORRUPTION_ERROR_ACTION(m) ABORT +#endif /* CORRUPTION_ERROR_ACTION */ + +#ifndef USAGE_ERROR_ACTION +#define USAGE_ERROR_ACTION(m,p) ABORT +#endif /* USAGE_ERROR_ACTION */ + +#endif /* PROCEED_ON_ERROR */ + + +/* -------------------------- Debugging setup ---------------------------- */ + +#if ! DEBUG + +#define check_free_chunk(M,P) +#define check_inuse_chunk(M,P) +#define check_malloced_chunk(M,P,N) +#define check_mmapped_chunk(M,P) +#define check_malloc_state(M) +#define check_top_chunk(M,P) + +#else /* DEBUG */ +#define check_free_chunk(M,P) do_check_free_chunk(M,P) +#define check_inuse_chunk(M,P) do_check_inuse_chunk(M,P) +#define check_top_chunk(M,P) do_check_top_chunk(M,P) +#define check_malloced_chunk(M,P,N) do_check_malloced_chunk(M,P,N) +#define check_mmapped_chunk(M,P) do_check_mmapped_chunk(M,P) +#define check_malloc_state(M) do_check_malloc_state(M) + +static void do_check_any_chunk(mstate m, mchunkptr p); +static void do_check_top_chunk(mstate m, mchunkptr p); +static void do_check_mmapped_chunk(mstate m, mchunkptr p); +static void do_check_inuse_chunk(mstate m, mchunkptr p); +static void do_check_free_chunk(mstate m, mchunkptr p); +static void do_check_malloced_chunk(mstate m, void* mem, size_t s); +static void do_check_tree(mstate m, tchunkptr t); +static void do_check_treebin(mstate m, bindex_t i); +static void 
do_check_smallbin(mstate m, bindex_t i); +static void do_check_malloc_state(mstate m); +static int bin_find(mstate m, mchunkptr x); +static size_t traverse_and_check(mstate m); +#endif /* DEBUG */ + +/* ---------------------------- Indexing Bins ---------------------------- */ + +#define is_small(s) (((s) >> SMALLBIN_SHIFT) < NSMALLBINS) +#define small_index(s) (bindex_t)((s) >> SMALLBIN_SHIFT) +#define small_index2size(i) ((i) << SMALLBIN_SHIFT) +#define MIN_SMALL_INDEX (small_index(MIN_CHUNK_SIZE)) + +/* addressing by index. See above about smallbin repositioning */ +#define smallbin_at(M, i) ((sbinptr)((char*)&((M)->smallbins[(i)<<1]))) +#define treebin_at(M,i) (&((M)->treebins[i])) + +/* assign tree index for size S to variable I. Use x86 asm if possible */ +#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) +#define compute_tree_index(S, I)\ +{\ + unsigned int X = S >> TREEBIN_SHIFT;\ + if (X == 0)\ + I = 0;\ + else if (X > 0xFFFF)\ + I = NTREEBINS-1;\ + else {\ + unsigned int K = (unsigned) sizeof(X)*__CHAR_BIT__ - 1 - (unsigned) __builtin_clz(X); \ + I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ + }\ +} + +#elif defined (__INTEL_COMPILER) +#define compute_tree_index(S, I)\ +{\ + size_t X = S >> TREEBIN_SHIFT;\ + if (X == 0)\ + I = 0;\ + else if (X > 0xFFFF)\ + I = NTREEBINS-1;\ + else {\ + unsigned int K = _bit_scan_reverse (X); \ + I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ + }\ +} + +#elif defined(_MSC_VER) && _MSC_VER>=1300 +#define compute_tree_index(S, I)\ +{\ + size_t X = S >> TREEBIN_SHIFT;\ + if (X == 0)\ + I = 0;\ + else if (X > 0xFFFF)\ + I = NTREEBINS-1;\ + else {\ + unsigned int K;\ + _BitScanReverse((DWORD *) &K, (DWORD) X);\ + I = (bindex_t)((K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1)));\ + }\ +} + +#else /* GNUC */ +#define compute_tree_index(S, I)\ +{\ + size_t X = S >> TREEBIN_SHIFT;\ + if (X == 0)\ + I = 0;\ + else if (X > 0xFFFF)\ + I = NTREEBINS-1;\ + else {\ + unsigned int Y 
= (unsigned int)X;\ + unsigned int N = ((Y - 0x100) >> 16) & 8;\ + unsigned int K = (((Y <<= N) - 0x1000) >> 16) & 4;\ + N += K;\ + N += K = (((Y <<= K) - 0x4000) >> 16) & 2;\ + K = 14 - N + ((Y <<= K) >> 15);\ + I = (K << 1) + ((S >> (K + (TREEBIN_SHIFT-1)) & 1));\ + }\ +} +#endif /* GNUC */ + +/* Bit representing maximum resolved size in a treebin at i */ +#define bit_for_tree_index(i) \ + (i == NTREEBINS-1)? (SIZE_T_BITSIZE-1) : (((i) >> 1) + TREEBIN_SHIFT - 2) + +/* Shift placing maximum resolved bit in a treebin at i as sign bit */ +#define leftshift_for_tree_index(i) \ + ((i == NTREEBINS-1)? 0 : \ + ((SIZE_T_BITSIZE-SIZE_T_ONE) - (((i) >> 1) + TREEBIN_SHIFT - 2))) + +/* The size of the smallest chunk held in bin with index i */ +#define minsize_for_tree_index(i) \ + ((SIZE_T_ONE << (((i) >> 1) + TREEBIN_SHIFT)) | \ + (((size_t)((i) & SIZE_T_ONE)) << (((i) >> 1) + TREEBIN_SHIFT - 1))) + + +/* ------------------------ Operations on bin maps ----------------------- */ + +/* bit corresponding to given index */ +#define idx2bit(i) ((binmap_t)(1) << (i)) + +/* Mark/Clear bits with given index */ +#define mark_smallmap(M,i) ((M)->smallmap |= idx2bit(i)) +#define clear_smallmap(M,i) ((M)->smallmap &= ~idx2bit(i)) +#define smallmap_is_marked(M,i) ((M)->smallmap & idx2bit(i)) + +#define mark_treemap(M,i) ((M)->treemap |= idx2bit(i)) +#define clear_treemap(M,i) ((M)->treemap &= ~idx2bit(i)) +#define treemap_is_marked(M,i) ((M)->treemap & idx2bit(i)) + +/* isolate the least set bit of a bitmap */ +#define least_bit(x) ((x) & -(x)) + +/* mask with all bits to left of least bit of x on */ +#define left_bits(x) ((x<<1) | -(x<<1)) + +/* mask with all bits to left of or equal to least bit of x on */ +#define same_or_left_bits(x) ((x) | -(x)) + +/* index corresponding to given bit. 
Use x86 asm if possible */ + +#if defined(__GNUC__) && (defined(__i386__) || defined(__x86_64__)) +#define compute_bit2idx(X, I)\ +{\ + unsigned int J;\ + J = __builtin_ctz(X); \ + I = (bindex_t)J;\ +} + +#elif defined (__INTEL_COMPILER) +#define compute_bit2idx(X, I)\ +{\ + unsigned int J;\ + J = _bit_scan_forward (X); \ + I = (bindex_t)J;\ +} + +#elif defined(_MSC_VER) && _MSC_VER>=1300 +#define compute_bit2idx(X, I)\ +{\ + unsigned int J;\ + _BitScanForward((DWORD *) &J, X);\ + I = (bindex_t)J;\ +} + +#elif USE_BUILTIN_FFS +#define compute_bit2idx(X, I) I = ffs(X)-1 + +#else +#define compute_bit2idx(X, I)\ +{\ + unsigned int Y = X - 1;\ + unsigned int K = Y >> (16-4) & 16;\ + unsigned int N = K; Y >>= K;\ + N += K = Y >> (8-3) & 8; Y >>= K;\ + N += K = Y >> (4-2) & 4; Y >>= K;\ + N += K = Y >> (2-1) & 2; Y >>= K;\ + N += K = Y >> (1-0) & 1; Y >>= K;\ + I = (bindex_t)(N + Y);\ +} +#endif /* GNUC */ + + +/* ----------------------- Runtime Check Support ------------------------- */ + +/* + For security, the main invariant is that malloc/free/etc never + writes to a static address other than malloc_state, unless static + malloc_state itself has been corrupted, which cannot occur via + malloc (because of these checks). In essence this means that we + believe all pointers, sizes, maps etc held in malloc_state, but + check all of those linked or offsetted from other embedded data + structures. These checks are interspersed with main code in a way + that tends to minimize their run-time cost. + + When FOOTERS is defined, in addition to range checking, we also + verify footer fields of inuse chunks, which can be used to guarantee + that the mstate controlling malloc/free is intact. 
This is a + streamlined version of the approach described by William Robertson + et al in "Run-time Detection of Heap-based Overflows" LISA'03 + http://www.usenix.org/events/lisa03/tech/robertson.html The footer + of an inuse chunk holds the xor of its mstate and a random seed, + which is checked upon calls to free() and realloc(). This is + (probabilistically) unguessable from outside the program, but can be + computed by any code successfully malloc'ing any chunk, so does not + itself provide protection against code that has already broken + security through some other means. Unlike Robertson et al, we + always dynamically check addresses of all offset chunks (previous, + next, etc). This turns out to be cheaper than relying on hashes. +*/ + +#if !INSECURE +/* Check if address a is at least as high as any from MORECORE or MMAP */ +#define ok_address(M, a) ((char*)(a) >= (M)->least_addr) +/* Check if address of next chunk n is higher than base chunk p */ +#define ok_next(p, n) ((char*)(p) < (char*)(n)) +/* Check if p has inuse status */ +#define ok_inuse(p) is_inuse(p) +/* Check if p has its pinuse bit on */ +#define ok_pinuse(p) pinuse(p) + +#else /* !INSECURE */ +#define ok_address(M, a) (1) +#define ok_next(b, n) (1) +#define ok_inuse(p) (1) +#define ok_pinuse(p) (1) +#endif /* !INSECURE */ + +#if (FOOTERS && !INSECURE) +/* Check if (alleged) mstate m has expected magic field */ +#define ok_magic(M) ((M)->magic == mparams.magic) +#else /* (FOOTERS && !INSECURE) */ +#define ok_magic(M) (1) +#endif /* (FOOTERS && !INSECURE) */ + +/* In gcc, use __builtin_expect to minimize impact of checks */ +#if !INSECURE +#if defined(__GNUC__) && __GNUC__ >= 3 +#define RTCHECK(e) __builtin_expect(e, 1) +#else /* GNUC */ +#define RTCHECK(e) (e) +#endif /* GNUC */ +#else /* !INSECURE */ +#define RTCHECK(e) (1) +#endif /* !INSECURE */ + +/* macros to set up inuse chunks with or without footers */ + +#if !FOOTERS + +#define mark_inuse_foot(M,p,s) + +/* Macros for setting head/foot 
of non-mmapped chunks */ + +/* Set cinuse bit and pinuse bit of next chunk */ +#define set_inuse(M,p,s)\ + ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ + ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) + +/* Set cinuse and pinuse of this chunk and pinuse of next chunk */ +#define set_inuse_and_pinuse(M,p,s)\ + ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ + ((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT) + +/* Set size, cinuse and pinuse bit of this chunk */ +#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ + ((p)->head = (s|PINUSE_BIT|CINUSE_BIT)) + +#else /* FOOTERS */ + +/* Set foot of inuse chunk to be xor of mstate and seed */ +#define mark_inuse_foot(M,p,s)\ + (((mchunkptr)((char*)(p) + (s)))->prev_foot = ((size_t)(M) ^ mparams.magic)) + +#define get_mstate_for(p)\ + ((mstate)(((mchunkptr)((char*)(p) +\ + (chunksize(p))))->prev_foot ^ mparams.magic)) + +#define set_inuse(M,p,s)\ + ((p)->head = (((p)->head & PINUSE_BIT)|s|CINUSE_BIT),\ + (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT), \ + mark_inuse_foot(M,p,s)) + +#define set_inuse_and_pinuse(M,p,s)\ + ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ + (((mchunkptr)(((char*)(p)) + (s)))->head |= PINUSE_BIT),\ + mark_inuse_foot(M,p,s)) + +#define set_size_and_pinuse_of_inuse_chunk(M, p, s)\ + ((p)->head = (s|PINUSE_BIT|CINUSE_BIT),\ + mark_inuse_foot(M, p, s)) + +#endif /* !FOOTERS */ + +/* ---------------------------- setting mparams -------------------------- */ + +#if LOCK_AT_FORK +static void pre_fork(void) { ACQUIRE_LOCK(&(gm)->mutex); } +static void post_fork_parent(void) { RELEASE_LOCK(&(gm)->mutex); } +static void post_fork_child(void) { INITIAL_LOCK(&(gm)->mutex); } +#endif /* LOCK_AT_FORK */ + +/* Initialize mparams */ +static int init_mparams(void) { +#ifdef NEED_GLOBAL_LOCK_INIT + if (malloc_global_mutex_status <= 0) + init_malloc_global_mutex(); +#endif + + ACQUIRE_MALLOC_GLOBAL_LOCK(); + if (mparams.magic == 0) { + size_t magic; + size_t psize; + size_t gsize; + +#ifndef 
WIN32 + psize = malloc_getpagesize; + gsize = ((DEFAULT_GRANULARITY != 0)? DEFAULT_GRANULARITY : psize); +#else /* WIN32 */ + { + SYSTEM_INFO system_info; + GetSystemInfo(&system_info); + psize = system_info.dwPageSize; + gsize = ((DEFAULT_GRANULARITY != 0)? + DEFAULT_GRANULARITY : system_info.dwAllocationGranularity); + } +#endif /* WIN32 */ + + /* Sanity-check configuration: + size_t must be unsigned and as wide as pointer type. + ints must be at least 4 bytes. + alignment must be at least 8. + Alignment, min chunk size, and page size must all be powers of 2. + */ + if ((sizeof(size_t) != sizeof(char*)) || + (MAX_SIZE_T < MIN_CHUNK_SIZE) || + (sizeof(int) < 4) || + (MALLOC_ALIGNMENT < (size_t)8U) || + ((MALLOC_ALIGNMENT & (MALLOC_ALIGNMENT-SIZE_T_ONE)) != 0) || + ((MCHUNK_SIZE & (MCHUNK_SIZE-SIZE_T_ONE)) != 0) || + ((gsize & (gsize-SIZE_T_ONE)) != 0) || + ((psize & (psize-SIZE_T_ONE)) != 0)) + ABORT; + mparams.granularity = gsize; + mparams.page_size = psize; + mparams.mmap_threshold = DEFAULT_MMAP_THRESHOLD; + mparams.trim_threshold = DEFAULT_TRIM_THRESHOLD; +#if MORECORE_CONTIGUOUS + mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT; +#else /* MORECORE_CONTIGUOUS */ + mparams.default_mflags = USE_LOCK_BIT|USE_MMAP_BIT|USE_NONCONTIGUOUS_BIT; +#endif /* MORECORE_CONTIGUOUS */ + +#if !ONLY_MSPACES + /* Set up lock for main malloc area */ + gm->mflags = mparams.default_mflags; + (void)INITIAL_LOCK(&gm->mutex); +#endif +#if LOCK_AT_FORK + pthread_atfork(&pre_fork, &post_fork_parent, &post_fork_child); +#endif + + { +#if USE_DEV_RANDOM + int fd; + unsigned char buf[sizeof(size_t)]; + /* Try to use /dev/urandom, else fall back on using time */ + if ((fd = open("/dev/urandom", O_RDONLY)) >= 0 && + read(fd, buf, sizeof(buf)) == sizeof(buf)) { + magic = *((size_t *) buf); + close(fd); + } + else +#endif /* USE_DEV_RANDOM */ +#ifdef WIN32 + magic = (size_t)(GetTickCount() ^ (size_t)0x55555555U); +#elif defined(LACKS_TIME_H) + magic = (size_t)&magic ^ (size_t)0x55555555U; 
+#else + magic = (size_t)(time(0) ^ (size_t)0x55555555U); +#endif + magic |= (size_t)8U; /* ensure nonzero */ + magic &= ~(size_t)7U; /* improve chances of fault for bad values */ + /* Until memory modes commonly available, use volatile-write */ + (*(volatile size_t *)(&(mparams.magic))) = magic; + } + } + + RELEASE_MALLOC_GLOBAL_LOCK(); + return 1; +} + +/* support for mallopt */ +static int change_mparam(int param_number, int value) { + size_t val; + ensure_initialization(); + val = (value == -1)? MAX_SIZE_T : (size_t)value; + switch(param_number) { + case M_TRIM_THRESHOLD: + mparams.trim_threshold = val; + return 1; + case M_GRANULARITY: + if (val >= mparams.page_size && ((val & (val-1)) == 0)) { + mparams.granularity = val; + return 1; + } + else + return 0; + case M_MMAP_THRESHOLD: + mparams.mmap_threshold = val; + return 1; + default: + return 0; + } +} + +#if DEBUG +/* ------------------------- Debugging Support --------------------------- */ + +/* Check properties of any chunk, whether free, inuse, mmapped etc */ +static void do_check_any_chunk(mstate m, mchunkptr p) { + assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); + assert(ok_address(m, p)); +} + +/* Check properties of top chunk */ +static void do_check_top_chunk(mstate m, mchunkptr p) { + msegmentptr sp = segment_holding(m, (char*)p); + size_t sz = p->head & ~INUSE_BITS; /* third-lowest bit can be set! 
*/ + assert(sp != 0); + assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); + assert(ok_address(m, p)); + assert(sz == m->topsize); + assert(sz > 0); + assert(sz == ((sp->base + sp->size) - (char*)p) - TOP_FOOT_SIZE); + assert(pinuse(p)); + assert(!pinuse(chunk_plus_offset(p, sz))); +} + +/* Check properties of (inuse) mmapped chunks */ +static void do_check_mmapped_chunk(mstate m, mchunkptr p) { + size_t sz = chunksize(p); + size_t len = (sz + (p->prev_foot) + MMAP_FOOT_PAD); + assert(is_mmapped(p)); + assert(use_mmap(m)); + assert((is_aligned(chunk2mem(p))) || (p->head == FENCEPOST_HEAD)); + assert(ok_address(m, p)); + assert(!is_small(sz)); + assert((len & (mparams.page_size-SIZE_T_ONE)) == 0); + assert(chunk_plus_offset(p, sz)->head == FENCEPOST_HEAD); + assert(chunk_plus_offset(p, sz+SIZE_T_SIZE)->head == 0); +} + +/* Check properties of inuse chunks */ +static void do_check_inuse_chunk(mstate m, mchunkptr p) { + do_check_any_chunk(m, p); + assert(is_inuse(p)); + assert(next_pinuse(p)); + /* If not pinuse and not mmapped, previous chunk has OK offset */ + assert(is_mmapped(p) || pinuse(p) || next_chunk(prev_chunk(p)) == p); + if (is_mmapped(p)) + do_check_mmapped_chunk(m, p); +} + +/* Check properties of free chunks */ +static void do_check_free_chunk(mstate m, mchunkptr p) { + size_t sz = chunksize(p); + mchunkptr next = chunk_plus_offset(p, sz); + do_check_any_chunk(m, p); + assert(!is_inuse(p)); + assert(!next_pinuse(p)); + assert (!is_mmapped(p)); + if (p != m->dv && p != m->top) { + if (sz >= MIN_CHUNK_SIZE) { + assert((sz & CHUNK_ALIGN_MASK) == 0); + assert(is_aligned(chunk2mem(p))); + assert(next->prev_foot == sz); + assert(pinuse(p)); + assert (next == m->top || is_inuse(next)); + assert(p->fd->bk == p); + assert(p->bk->fd == p); + } + else /* markers are always of size SIZE_T_SIZE */ + assert(sz == SIZE_T_SIZE); + } +} + +/* Check properties of malloced chunks at the point they are malloced */ +static void do_check_malloced_chunk(mstate 
m, void* mem, size_t s) { + if (mem != 0) { + mchunkptr p = mem2chunk(mem); + size_t sz = p->head & ~INUSE_BITS; + do_check_inuse_chunk(m, p); + assert((sz & CHUNK_ALIGN_MASK) == 0); + assert(sz >= MIN_CHUNK_SIZE); + assert(sz >= s); + /* unless mmapped, size is less than MIN_CHUNK_SIZE more than request */ + assert(is_mmapped(p) || sz < (s + MIN_CHUNK_SIZE)); + } +} + +/* Check a tree and its subtrees. */ +static void do_check_tree(mstate m, tchunkptr t) { + tchunkptr head = 0; + tchunkptr u = t; + bindex_t tindex = t->index; + size_t tsize = chunksize(t); + bindex_t idx; + compute_tree_index(tsize, idx); + assert(tindex == idx); + assert(tsize >= MIN_LARGE_SIZE); + assert(tsize >= minsize_for_tree_index(idx)); + assert((idx == NTREEBINS-1) || (tsize < minsize_for_tree_index((idx+1)))); + + do { /* traverse through chain of same-sized nodes */ + do_check_any_chunk(m, ((mchunkptr)u)); + assert(u->index == tindex); + assert(chunksize(u) == tsize); + assert(!is_inuse(u)); + assert(!next_pinuse(u)); + assert(u->fd->bk == u); + assert(u->bk->fd == u); + if (u->parent == 0) { + assert(u->child[0] == 0); + assert(u->child[1] == 0); + } + else { + assert(head == 0); /* only one node on chain has parent */ + head = u; + assert(u->parent != u); + assert (u->parent->child[0] == u || + u->parent->child[1] == u || + *((tbinptr*)(u->parent)) == u); + if (u->child[0] != 0) { + assert(u->child[0]->parent == u); + assert(u->child[0] != u); + do_check_tree(m, u->child[0]); + } + if (u->child[1] != 0) { + assert(u->child[1]->parent == u); + assert(u->child[1] != u); + do_check_tree(m, u->child[1]); + } + if (u->child[0] != 0 && u->child[1] != 0) { + assert(chunksize(u->child[0]) < chunksize(u->child[1])); + } + } + u = u->fd; + } while (u != t); + assert(head != 0); +} + +/* Check all the chunks in a treebin. 
*/ +static void do_check_treebin(mstate m, bindex_t i) { + tbinptr* tb = treebin_at(m, i); + tchunkptr t = *tb; + int empty = (m->treemap & (1U << i)) == 0; + if (t == 0) + assert(empty); + if (!empty) + do_check_tree(m, t); +} + +/* Check all the chunks in a smallbin. */ +static void do_check_smallbin(mstate m, bindex_t i) { + sbinptr b = smallbin_at(m, i); + mchunkptr p = b->bk; + unsigned int empty = (m->smallmap & (1U << i)) == 0; + if (p == b) + assert(empty); + if (!empty) { + for (; p != b; p = p->bk) { + size_t size = chunksize(p); + mchunkptr q; + /* each chunk claims to be free */ + do_check_free_chunk(m, p); + /* chunk belongs in bin */ + assert(small_index(size) == i); + assert(p->bk == b || chunksize(p->bk) == chunksize(p)); + /* chunk is followed by an inuse chunk */ + q = next_chunk(p); + if (q->head != FENCEPOST_HEAD) + do_check_inuse_chunk(m, q); + } + } +} + +/* Find x in a bin. Used in other check functions. */ +static int bin_find(mstate m, mchunkptr x) { + size_t size = chunksize(x); + if (is_small(size)) { + bindex_t sidx = small_index(size); + sbinptr b = smallbin_at(m, sidx); + if (smallmap_is_marked(m, sidx)) { + mchunkptr p = b; + do { + if (p == x) + return 1; + } while ((p = p->fd) != b); + } + } + else { + bindex_t tidx; + compute_tree_index(size, tidx); + if (treemap_is_marked(m, tidx)) { + tchunkptr t = *treebin_at(m, tidx); + size_t sizebits = size << leftshift_for_tree_index(tidx); + while (t != 0 && chunksize(t) != size) { + t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; + sizebits <<= 1; + } + if (t != 0) { + tchunkptr u = t; + do { + if (u == (tchunkptr)x) + return 1; + } while ((u = u->fd) != t); + } + } + } + return 0; +} + +/* Traverse each chunk and check it; return total */ +static size_t traverse_and_check(mstate m) { + size_t sum = 0; + if (is_initialized(m)) { + msegmentptr s = &m->seg; + sum += m->topsize + TOP_FOOT_SIZE; + while (s != 0) { + mchunkptr q = align_as_chunk(s->base); + mchunkptr lastq = 0; + 
assert(pinuse(q)); + while (segment_holds(s, q) && + q != m->top && q->head != FENCEPOST_HEAD) { + sum += chunksize(q); + if (is_inuse(q)) { + assert(!bin_find(m, q)); + do_check_inuse_chunk(m, q); + } + else { + assert(q == m->dv || bin_find(m, q)); + assert(lastq == 0 || is_inuse(lastq)); /* Not 2 consecutive free */ + do_check_free_chunk(m, q); + } + lastq = q; + q = next_chunk(q); + } + s = s->next; + } + } + return sum; +} + + +/* Check all properties of malloc_state. */ +static void do_check_malloc_state(mstate m) { + bindex_t i; + size_t total; + /* check bins */ + for (i = 0; i < NSMALLBINS; ++i) + do_check_smallbin(m, i); + for (i = 0; i < NTREEBINS; ++i) + do_check_treebin(m, i); + + if (m->dvsize != 0) { /* check dv chunk */ + do_check_any_chunk(m, m->dv); + assert(m->dvsize == chunksize(m->dv)); + assert(m->dvsize >= MIN_CHUNK_SIZE); + assert(bin_find(m, m->dv) == 0); + } + + if (m->top != 0) { /* check top chunk */ + do_check_top_chunk(m, m->top); + /*assert(m->topsize == chunksize(m->top)); redundant */ + assert(m->topsize > 0); + assert(bin_find(m, m->top) == 0); + } + + total = traverse_and_check(m); + assert(total <= m->footprint); + assert(m->footprint <= m->max_footprint); +} +#endif /* DEBUG */ + +/* ----------------------------- statistics ------------------------------ */ + +#if !NO_MALLINFO +static struct mallinfo internal_mallinfo(mstate m) { + struct mallinfo nm = { 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 }; + ensure_initialization(); + if (!PREACTION(m)) { + check_malloc_state(m); + if (is_initialized(m)) { + size_t nfree = SIZE_T_ONE; /* top always free */ + size_t mfree = m->topsize + TOP_FOOT_SIZE; + size_t sum = mfree; + msegmentptr s = &m->seg; + while (s != 0) { + mchunkptr q = align_as_chunk(s->base); + while (segment_holds(s, q) && + q != m->top && q->head != FENCEPOST_HEAD) { + size_t sz = chunksize(q); + sum += sz; + if (!is_inuse(q)) { + mfree += sz; + ++nfree; + } + q = next_chunk(q); + } + s = s->next; + } + + nm.arena = sum; + 
nm.ordblks = nfree; + nm.hblkhd = m->footprint - sum; + nm.usmblks = m->max_footprint; + nm.uordblks = m->footprint - mfree; + nm.fordblks = mfree; + nm.keepcost = m->topsize; + } + + POSTACTION(m); + } + return nm; +} +#endif /* !NO_MALLINFO */ + +#if !NO_MALLOC_STATS +static void internal_malloc_stats(mstate m) { + ensure_initialization(); + if (!PREACTION(m)) { + size_t maxfp = 0; + size_t fp = 0; + size_t used = 0; + check_malloc_state(m); + if (is_initialized(m)) { + msegmentptr s = &m->seg; + maxfp = m->max_footprint; + fp = m->footprint; + used = fp - (m->topsize + TOP_FOOT_SIZE); + + while (s != 0) { + mchunkptr q = align_as_chunk(s->base); + while (segment_holds(s, q) && + q != m->top && q->head != FENCEPOST_HEAD) { + if (!is_inuse(q)) + used -= chunksize(q); + q = next_chunk(q); + } + s = s->next; + } + } + POSTACTION(m); /* drop lock */ + fprintf(stderr, "max system bytes = %10lu\n", (unsigned long)(maxfp)); + fprintf(stderr, "system bytes = %10lu\n", (unsigned long)(fp)); + fprintf(stderr, "in use bytes = %10lu\n", (unsigned long)(used)); + } +} +#endif /* NO_MALLOC_STATS */ + +/* ----------------------- Operations on smallbins ----------------------- */ + +/* + Various forms of linking and unlinking are defined as macros. Even + the ones for trees, which are very long but have very short typical + paths. This is ugly but reduces reliance on inlining support of + compilers. 
+*/ + +/* Link a free chunk into a smallbin */ +#define insert_small_chunk(M, P, S) {\ + bindex_t I = small_index(S);\ + mchunkptr B = smallbin_at(M, I);\ + mchunkptr F = B;\ + assert(S >= MIN_CHUNK_SIZE);\ + if (!smallmap_is_marked(M, I))\ + mark_smallmap(M, I);\ + else if (RTCHECK(ok_address(M, B->fd)))\ + F = B->fd;\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + B->fd = P;\ + F->bk = P;\ + P->fd = F;\ + P->bk = B;\ +} + +/* Unlink a chunk from a smallbin */ +#define unlink_small_chunk(M, P, S) {\ + mchunkptr F = P->fd;\ + mchunkptr B = P->bk;\ + bindex_t I = small_index(S);\ + assert(P != B);\ + assert(P != F);\ + assert(chunksize(P) == small_index2size(I));\ + if (RTCHECK(F == smallbin_at(M,I) || (ok_address(M, F) && F->bk == P))) { \ + if (B == F) {\ + clear_smallmap(M, I);\ + }\ + else if (RTCHECK(B == smallbin_at(M,I) ||\ + (ok_address(M, B) && B->fd == P))) {\ + F->bk = B;\ + B->fd = F;\ + }\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + }\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ +} + +/* Unlink the first chunk from a smallbin */ +#define unlink_first_small_chunk(M, B, P, I) {\ + mchunkptr F = P->fd;\ + assert(P != B);\ + assert(P != F);\ + assert(chunksize(P) == small_index2size(I));\ + if (B == F) {\ + clear_smallmap(M, I);\ + }\ + else if (RTCHECK(ok_address(M, F) && F->bk == P)) {\ + F->bk = B;\ + B->fd = F;\ + }\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ +} + +/* Replace dv node, binning the old one */ +/* Used only when dvsize known to be small */ +#define replace_dv(M, P, S) {\ + size_t DVS = M->dvsize;\ + assert(is_small(DVS));\ + if (DVS != 0) {\ + mchunkptr DV = M->dv;\ + insert_small_chunk(M, DV, DVS);\ + }\ + M->dvsize = S;\ + M->dv = P;\ +} + +/* ------------------------- Operations on trees ------------------------- */ + +/* Insert chunk into tree */ +#define insert_large_chunk(M, X, S) {\ + tbinptr* H;\ + bindex_t I;\ + compute_tree_index(S, I);\ + H = treebin_at(M, I);\ + X->index = I;\ + X->child[0] = X->child[1] = 0;\ + if 
(!treemap_is_marked(M, I)) {\
+    mark_treemap(M, I);\
+    *H = X;\
+    X->parent = (tchunkptr)H;\
+    X->fd = X->bk = X;\
+  }\
+  else {\
+    tchunkptr T = *H;\
+    size_t K = S << leftshift_for_tree_index(I);\
+    for (;;) {\
+      if (chunksize(T) != S) {\
+        tchunkptr* C = &(T->child[(K >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]);\
+        K <<= 1;\
+        if (*C != 0)\
+          T = *C;\
+        else if (RTCHECK(ok_address(M, C))) {\
+          *C = X;\
+          X->parent = T;\
+          X->fd = X->bk = X;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+      else {\
+        tchunkptr F = T->fd;\
+        if (RTCHECK(ok_address(M, T) && ok_address(M, F))) {\
+          T->fd = F->bk = X;\
+          X->fd = F;\
+          X->bk = T;\
+          X->parent = 0;\
+          break;\
+        }\
+        else {\
+          CORRUPTION_ERROR_ACTION(M);\
+          break;\
+        }\
+      }\
+    }\
+  }\
+}
+
+/*
+  Unlink steps:
+
+  1. If x is a chained node, unlink it from its same-sized fd/bk links
+     and choose its bk node as its replacement.
+  2. If x was the last node of its size, but not a leaf node, it must
+     be replaced with a leaf node (not merely one with an open left or
+     right), to make sure that lefts and rights of descendants
+     correspond properly to bit masks.  We use the rightmost descendant
+     of x.  We could use any other leaf, but this is easy to locate and
+     tends to counteract removal of leftmosts elsewhere, and so keeps
+     paths shorter than minimally guaranteed.  This doesn't loop much
+     because on average a node in a tree is near the bottom.
+  3. If x is the base of a chain (i.e., has parent links) relink
+     x's parent and children to x's replacement (or null if none).
+*/ + +#define unlink_large_chunk(M, X) {\ + tchunkptr XP = X->parent;\ + tchunkptr R;\ + if (X->bk != X) {\ + tchunkptr F = X->fd;\ + R = X->bk;\ + if (RTCHECK(ok_address(M, F) && F->bk == X && R->fd == X)) {\ + F->bk = R;\ + R->fd = F;\ + }\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + }\ + else {\ + tchunkptr* RP;\ + if (((R = *(RP = &(X->child[1]))) != 0) ||\ + ((R = *(RP = &(X->child[0]))) != 0)) {\ + tchunkptr* CP;\ + while ((*(CP = &(R->child[1])) != 0) ||\ + (*(CP = &(R->child[0])) != 0)) {\ + R = *(RP = CP);\ + }\ + if (RTCHECK(ok_address(M, RP)))\ + *RP = 0;\ + else {\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + }\ + }\ + if (XP != 0) {\ + tbinptr* H = treebin_at(M, X->index);\ + if (X == *H) {\ + if ((*H = R) == 0) \ + clear_treemap(M, X->index);\ + }\ + else if (RTCHECK(ok_address(M, XP))) {\ + if (XP->child[0] == X) \ + XP->child[0] = R;\ + else \ + XP->child[1] = R;\ + }\ + else\ + CORRUPTION_ERROR_ACTION(M);\ + if (R != 0) {\ + if (RTCHECK(ok_address(M, R))) {\ + tchunkptr C0, C1;\ + R->parent = XP;\ + if ((C0 = X->child[0]) != 0) {\ + if (RTCHECK(ok_address(M, C0))) {\ + R->child[0] = C0;\ + C0->parent = R;\ + }\ + else\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + if ((C1 = X->child[1]) != 0) {\ + if (RTCHECK(ok_address(M, C1))) {\ + R->child[1] = C1;\ + C1->parent = R;\ + }\ + else\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + }\ + else\ + CORRUPTION_ERROR_ACTION(M);\ + }\ + }\ +} + +/* Relays to large vs small bin operations */ + +#define insert_chunk(M, P, S)\ + if (is_small(S)) insert_small_chunk(M, P, S)\ + else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); } + +#define unlink_chunk(M, P, S)\ + if (is_small(S)) unlink_small_chunk(M, P, S)\ + else { tchunkptr TP = (tchunkptr)(P); unlink_large_chunk(M, TP); } + + +/* Relays to internal calls to malloc/free from realloc, memalign etc */ + +#if ONLY_MSPACES +#define internal_malloc(m, b) mspace_malloc(m, b) +#define internal_free(m, mem) mspace_free(m,mem); +#else /* ONLY_MSPACES */ +#if MSPACES 
+#define internal_malloc(m, b)\ + ((m == gm)? dlmalloc(b) : mspace_malloc(m, b)) +#define internal_free(m, mem)\ + if (m == gm) dlfree(mem); else mspace_free(m,mem); +#else /* MSPACES */ +#define internal_malloc(m, b) dlmalloc(b) +#define internal_free(m, mem) dlfree(mem) +#endif /* MSPACES */ +#endif /* ONLY_MSPACES */ + +/* ----------------------- Direct-mmapping chunks ----------------------- */ + +/* + Directly mmapped chunks are set up with an offset to the start of + the mmapped region stored in the prev_foot field of the chunk. This + allows reconstruction of the required argument to MUNMAP when freed, + and also allows adjustment of the returned chunk to meet alignment + requirements (especially in memalign). +*/ + +/* Malloc using mmap */ +static void* mmap_alloc(mstate m, size_t nb) { + size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); + if (m->footprint_limit != 0) { + size_t fp = m->footprint + mmsize; + if (fp <= m->footprint || fp > m->footprint_limit) + return 0; + } + if (mmsize > nb) { /* Check for wrap around 0 */ + char* mm = (char*)(CALL_DIRECT_MMAP(mmsize)); + if (mm != CMFAIL) { + size_t offset = align_offset(chunk2mem(mm)); + size_t psize = mmsize - offset - MMAP_FOOT_PAD; + mchunkptr p = (mchunkptr)(mm + offset); + p->prev_foot = offset; + p->head = psize; + mark_inuse_foot(m, p, psize); + chunk_plus_offset(p, psize)->head = FENCEPOST_HEAD; + chunk_plus_offset(p, psize+SIZE_T_SIZE)->head = 0; + + if (m->least_addr == 0 || mm < m->least_addr) + m->least_addr = mm; + if ((m->footprint += mmsize) > m->max_footprint) + m->max_footprint = m->footprint; + assert(is_aligned(chunk2mem(p))); + check_mmapped_chunk(m, p); + return chunk2mem(p); + } + } + return 0; +} + +/* Realloc using mmap */ +static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) { + size_t oldsize = chunksize(oldp); + (void)flags; /* placate people compiling -Wunused */ + if (is_small(nb)) /* Can't shrink mmap regions below small size */ + 
return 0; + /* Keep old chunk if big enough but not too big */ + if (oldsize >= nb + SIZE_T_SIZE && + (oldsize - nb) <= (mparams.granularity << 1)) + return oldp; + else { + size_t offset = oldp->prev_foot; + size_t oldmmsize = oldsize + offset + MMAP_FOOT_PAD; + size_t newmmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK); + char* cp = (char*)CALL_MREMAP((char*)oldp - offset, + oldmmsize, newmmsize, flags); + if (cp != CMFAIL) { + mchunkptr newp = (mchunkptr)(cp + offset); + size_t psize = newmmsize - offset - MMAP_FOOT_PAD; + newp->head = psize; + mark_inuse_foot(m, newp, psize); + chunk_plus_offset(newp, psize)->head = FENCEPOST_HEAD; + chunk_plus_offset(newp, psize+SIZE_T_SIZE)->head = 0; + + if (cp < m->least_addr) + m->least_addr = cp; + if ((m->footprint += newmmsize - oldmmsize) > m->max_footprint) + m->max_footprint = m->footprint; + check_mmapped_chunk(m, newp); + return newp; + } + } + return 0; +} + + +/* -------------------------- mspace management -------------------------- */ + +/* Initialize top chunk and its size */ +static void init_top(mstate m, mchunkptr p, size_t psize) { + /* Ensure alignment */ + size_t offset = align_offset(chunk2mem(p)); + p = (mchunkptr)((char*)p + offset); + psize -= offset; + + m->top = p; + m->topsize = psize; + p->head = psize | PINUSE_BIT; + /* set size of fake trailing chunk holding overhead space only once */ + chunk_plus_offset(p, psize)->head = TOP_FOOT_SIZE; + m->trim_check = mparams.trim_threshold; /* reset on each update */ +} + +/* Initialize bins for a new mstate that is otherwise zeroed out */ +static void init_bins(mstate m) { + /* Establish circular links for smallbins */ + bindex_t i; + for (i = 0; i < NSMALLBINS; ++i) { + sbinptr bin = smallbin_at(m,i); + bin->fd = bin->bk = bin; + } +} + +#if PROCEED_ON_ERROR + +/* default corruption action */ +static void reset_on_error(mstate m) { + int i; + ++malloc_corruption_error_count; + /* Reinitialize fields to forget about all memory */ + 
m->smallmap = m->treemap = 0; + m->dvsize = m->topsize = 0; + m->seg.base = 0; + m->seg.size = 0; + m->seg.next = 0; + m->top = m->dv = 0; + for (i = 0; i < NTREEBINS; ++i) + *treebin_at(m, i) = 0; + init_bins(m); +} +#endif /* PROCEED_ON_ERROR */ + +/* Allocate chunk and prepend remainder with chunk in successor base. */ +static void* prepend_alloc(mstate m, char* newbase, char* oldbase, + size_t nb) { + mchunkptr p = align_as_chunk(newbase); + mchunkptr oldfirst = align_as_chunk(oldbase); + size_t psize = (char*)oldfirst - (char*)p; + mchunkptr q = chunk_plus_offset(p, nb); + size_t qsize = psize - nb; + set_size_and_pinuse_of_inuse_chunk(m, p, nb); + + assert((char*)oldfirst > (char*)q); + assert(pinuse(oldfirst)); + assert(qsize >= MIN_CHUNK_SIZE); + + /* consolidate remainder with first chunk of old base */ + if (oldfirst == m->top) { + size_t tsize = m->topsize += qsize; + m->top = q; + q->head = tsize | PINUSE_BIT; + check_top_chunk(m, q); + } + else if (oldfirst == m->dv) { + size_t dsize = m->dvsize += qsize; + m->dv = q; + set_size_and_pinuse_of_free_chunk(q, dsize); + } + else { + if (!is_inuse(oldfirst)) { + size_t nsize = chunksize(oldfirst); + unlink_chunk(m, oldfirst, nsize); + oldfirst = chunk_plus_offset(oldfirst, nsize); + qsize += nsize; + } + set_free_with_pinuse(q, qsize, oldfirst); + insert_chunk(m, q, qsize); + check_free_chunk(m, q); + } + + check_malloced_chunk(m, chunk2mem(p), nb); + return chunk2mem(p); +} + +/* Add a segment to hold a new noncontiguous region */ +static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) { + /* Determine locations and sizes of segment, fenceposts, old top */ + char* old_top = (char*)m->top; + msegmentptr oldsp = segment_holding(m, old_top); + char* old_end = oldsp->base + oldsp->size; + size_t ssize = pad_request(sizeof(struct malloc_segment)); + char* rawsp = old_end - (ssize + FOUR_SIZE_T_SIZES + CHUNK_ALIGN_MASK); + size_t offset = align_offset(chunk2mem(rawsp)); + char* asp = rawsp 
+ offset; + char* csp = (asp < (old_top + MIN_CHUNK_SIZE))? old_top : asp; + mchunkptr sp = (mchunkptr)csp; + msegmentptr ss = (msegmentptr)(chunk2mem(sp)); + mchunkptr tnext = chunk_plus_offset(sp, ssize); + mchunkptr p = tnext; + int nfences = 0; + + /* reset top to new space */ + init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); + + /* Set up segment record */ + assert(is_aligned(ss)); + set_size_and_pinuse_of_inuse_chunk(m, sp, ssize); + *ss = m->seg; /* Push current record */ + m->seg.base = tbase; + m->seg.size = tsize; + m->seg.sflags = mmapped; + m->seg.next = ss; + + /* Insert trailing fenceposts */ + for (;;) { + mchunkptr nextp = chunk_plus_offset(p, SIZE_T_SIZE); + p->head = FENCEPOST_HEAD; + ++nfences; + if ((char*)(&(nextp->head)) < old_end) + p = nextp; + else + break; + } + assert(nfences >= 2); + + /* Insert the rest of old top into a bin as an ordinary free chunk */ + if (csp != old_top) { + mchunkptr q = (mchunkptr)old_top; + size_t psize = csp - old_top; + mchunkptr tn = chunk_plus_offset(q, psize); + set_free_with_pinuse(q, psize, tn); + insert_chunk(m, q, psize); + } + + check_top_chunk(m, m->top); +} + +/* -------------------------- System allocation -------------------------- */ + +/* Get memory from system using MORECORE or MMAP */ +static void* sys_alloc(mstate m, size_t nb) { + char* tbase = CMFAIL; + size_t tsize = 0; + flag_t mmap_flag = 0; + size_t asize; /* allocation size */ + + ensure_initialization(); + + /* Directly map large chunks, but only if already initialized */ + if (use_mmap(m) && nb >= mparams.mmap_threshold && m->topsize != 0) { + void* mem = mmap_alloc(m, nb); + if (mem != 0) + return mem; + } + + asize = granularity_align(nb + SYS_ALLOC_PADDING); + if (asize <= nb) + return 0; /* wraparound */ + if (m->footprint_limit != 0) { + size_t fp = m->footprint + asize; + if (fp <= m->footprint || fp > m->footprint_limit) + return 0; + } + + /* + Try getting memory in any of three ways (in most-preferred to + 
least-preferred order):
+    1. A call to MORECORE that can normally contiguously extend memory.
+       (disabled if not MORECORE_CONTIGUOUS or not HAVE_MORECORE or
+       main space is mmapped or a previous contiguous call failed)
+    2. A call to MMAP new space (disabled if not HAVE_MMAP).
+       Note that under the default settings, if MORECORE is unable to
+       fulfill a request, and HAVE_MMAP is true, then mmap is
+       used as a noncontiguous system allocator. This is a useful backup
+       strategy for systems with holes in address spaces -- in this case
+       sbrk cannot contiguously expand the heap, but mmap may be able to
+       find space.
+    3. A call to MORECORE that cannot usually contiguously extend memory.
+       (disabled if not HAVE_MORECORE)
+
+  In all cases, we need to request enough bytes from system to ensure
+  we can malloc nb bytes upon success, so pad with enough space for
+  top_foot, plus alignment-pad to make sure we don't lose bytes if
+  not on boundary, and round this up to a granularity unit.
+  */
+
+  if (MORECORE_CONTIGUOUS && !use_noncontiguous(m)) {
+    char* br = CMFAIL;
+    size_t ssize = asize; /* sbrk call size */
+    msegmentptr ss = (m->top == 0)? 0 : segment_holding(m, (char*)m->top);
+    ACQUIRE_MALLOC_GLOBAL_LOCK();
+
+    if (ss == 0) {  /* First time through or recovery */
+      char* base = (char*)CALL_MORECORE(0);
+      if (base != CMFAIL) {
+        size_t fp;
+        /* Adjust to end on a page boundary */
+        if (!is_page_aligned(base))
+          ssize += (page_align((size_t)base) - (size_t)base);
+        fp = m->footprint + ssize; /* recheck limits */
+        if (ssize > nb && ssize < HALF_MAX_SIZE_T &&
+            (m->footprint_limit == 0 ||
+             (fp > m->footprint && fp <= m->footprint_limit)) &&
+            (br = (char*)(CALL_MORECORE(ssize))) == base) {
+          tbase = base;
+          tsize = ssize;
+        }
+      }
+    }
+    else {
+      /* Subtract out existing available top space from MORECORE request.
*/ + ssize = granularity_align(nb - m->topsize + SYS_ALLOC_PADDING); + /* Use mem here only if it did continuously extend old space */ + if (ssize < HALF_MAX_SIZE_T && + (br = (char*)(CALL_MORECORE(ssize))) == ss->base+ss->size) { + tbase = br; + tsize = ssize; + } + } + + if (tbase == CMFAIL) { /* Cope with partial failure */ + if (br != CMFAIL) { /* Try to use/extend the space we did get */ + if (ssize < HALF_MAX_SIZE_T && + ssize < nb + SYS_ALLOC_PADDING) { + size_t esize = granularity_align(nb + SYS_ALLOC_PADDING - ssize); + if (esize < HALF_MAX_SIZE_T) { + char* end = (char*)CALL_MORECORE(esize); + if (end != CMFAIL) + ssize += esize; + else { /* Can't use; try to release */ + (void) CALL_MORECORE(-ssize); + br = CMFAIL; + } + } + } + } + if (br != CMFAIL) { /* Use the space we did get */ + tbase = br; + tsize = ssize; + } + else + disable_contiguous(m); /* Don't try contiguous path in the future */ + } + + RELEASE_MALLOC_GLOBAL_LOCK(); + } + + if (HAVE_MMAP && tbase == CMFAIL) { /* Try MMAP */ + char* mp = (char*)(CALL_MMAP(asize)); + if (mp != CMFAIL) { + tbase = mp; + tsize = asize; + mmap_flag = USE_MMAP_BIT; + } + } + + if (HAVE_MORECORE && tbase == CMFAIL) { /* Try noncontiguous MORECORE */ + if (asize < HALF_MAX_SIZE_T) { + char* br = CMFAIL; + char* end = CMFAIL; + ACQUIRE_MALLOC_GLOBAL_LOCK(); + br = (char*)(CALL_MORECORE(asize)); + end = (char*)(CALL_MORECORE(0)); + RELEASE_MALLOC_GLOBAL_LOCK(); + if (br != CMFAIL && end != CMFAIL && br < end) { + size_t ssize = end - br; + if (ssize > nb + TOP_FOOT_SIZE) { + tbase = br; + tsize = ssize; + } + } + } + } + + if (tbase != CMFAIL) { + + if ((m->footprint += tsize) > m->max_footprint) + m->max_footprint = m->footprint; + + if (!is_initialized(m)) { /* first-time initialization */ + if (m->least_addr == 0 || tbase < m->least_addr) + m->least_addr = tbase; + m->seg.base = tbase; + m->seg.size = tsize; + m->seg.sflags = mmap_flag; + m->magic = mparams.magic; + m->release_checks = MAX_RELEASE_CHECK_RATE; + 
init_bins(m); +#if !ONLY_MSPACES + if (is_global(m)) + init_top(m, (mchunkptr)tbase, tsize - TOP_FOOT_SIZE); + else +#endif + { + /* Offset top by embedded malloc_state */ + mchunkptr mn = next_chunk(mem2chunk(m)); + init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) -TOP_FOOT_SIZE); + } + } + + else { + /* Try to merge with an existing segment */ + msegmentptr sp = &m->seg; + /* Only consider most recent segment if traversal suppressed */ + while (sp != 0 && tbase != sp->base + sp->size) + sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; + if (sp != 0 && + !is_extern_segment(sp) && + (sp->sflags & USE_MMAP_BIT) == mmap_flag && + segment_holds(sp, m->top)) { /* append */ + sp->size += tsize; + init_top(m, m->top, m->topsize + tsize); + } + else { + if (tbase < m->least_addr) + m->least_addr = tbase; + sp = &m->seg; + while (sp != 0 && sp->base != tbase + tsize) + sp = (NO_SEGMENT_TRAVERSAL) ? 0 : sp->next; + if (sp != 0 && + !is_extern_segment(sp) && + (sp->sflags & USE_MMAP_BIT) == mmap_flag) { + char* oldbase = sp->base; + sp->base = tbase; + sp->size += tsize; + return prepend_alloc(m, tbase, oldbase, nb); + } + else + add_segment(m, tbase, tsize, mmap_flag); + } + } + + if (nb < m->topsize) { /* Allocate from new or extended top space */ + size_t rsize = m->topsize -= nb; + mchunkptr p = m->top; + mchunkptr r = m->top = chunk_plus_offset(p, nb); + r->head = rsize | PINUSE_BIT; + set_size_and_pinuse_of_inuse_chunk(m, p, nb); + check_top_chunk(m, m->top); + check_malloced_chunk(m, chunk2mem(p), nb); + return chunk2mem(p); + } + } + + MALLOC_FAILURE_ACTION; + return 0; +} + +/* ----------------------- system deallocation -------------------------- */ + +/* Unmap and unlink any mmapped segments that don't contain used chunks */ +static size_t release_unused_segments(mstate m) { + size_t released = 0; + int nsegs = 0; + msegmentptr pred = &m->seg; + msegmentptr sp = pred->next; + while (sp != 0) { + char* base = sp->base; + size_t size = sp->size; + msegmentptr next = 
sp->next; + ++nsegs; + if (is_mmapped_segment(sp) && !is_extern_segment(sp)) { + mchunkptr p = align_as_chunk(base); + size_t psize = chunksize(p); + /* Can unmap if first chunk holds entire segment and not pinned */ + if (!is_inuse(p) && (char*)p + psize >= base + size - TOP_FOOT_SIZE) { + tchunkptr tp = (tchunkptr)p; + assert(segment_holds(sp, (char*)sp)); + if (p == m->dv) { + m->dv = 0; + m->dvsize = 0; + } + else { + unlink_large_chunk(m, tp); + } + if (CALL_MUNMAP(base, size) == 0) { + released += size; + m->footprint -= size; + /* unlink obsoleted record */ + sp = pred; + sp->next = next; + } + else { /* back out if cannot unmap */ + insert_large_chunk(m, tp, psize); + } + } + } + if (NO_SEGMENT_TRAVERSAL) /* scan only first segment */ + break; + pred = sp; + sp = next; + } + /* Reset check counter */ + m->release_checks = (((size_t) nsegs > (size_t) MAX_RELEASE_CHECK_RATE)? + (size_t) nsegs : (size_t) MAX_RELEASE_CHECK_RATE); + return released; +} + +static int sys_trim(mstate m, size_t pad) { + size_t released = 0; + ensure_initialization(); + if (pad < MAX_REQUEST && is_initialized(m)) { + pad += TOP_FOOT_SIZE; /* ensure enough room for segment overhead */ + + if (m->topsize > pad) { + /* Shrink top space in granularity-size units, keeping at least one */ + size_t unit = mparams.granularity; + size_t extra = ((m->topsize - pad + (unit - SIZE_T_ONE)) / unit - + SIZE_T_ONE) * unit; + msegmentptr sp = segment_holding(m, (char*)m->top); + + if (!is_extern_segment(sp)) { + if (is_mmapped_segment(sp)) { + if (HAVE_MMAP && + sp->size >= extra && + !has_segment_link(m, sp)) { /* can't shrink if pinned */ + size_t newsize = sp->size - extra; + (void)newsize; /* placate people compiling -Wunused-variable */ + /* Prefer mremap, fall back to munmap */ + if ((CALL_MREMAP(sp->base, sp->size, newsize, 0) != MFAIL) || + (CALL_MUNMAP(sp->base + newsize, extra) == 0)) { + released = extra; + } + } + } + else if (HAVE_MORECORE) { + if (extra >= HALF_MAX_SIZE_T) /* Avoid 
wrapping negative */ + extra = (HALF_MAX_SIZE_T) + SIZE_T_ONE - unit; + ACQUIRE_MALLOC_GLOBAL_LOCK(); + { + /* Make sure end of memory is where we last set it. */ + char* old_br = (char*)(CALL_MORECORE(0)); + if (old_br == sp->base + sp->size) { + char* rel_br = (char*)(CALL_MORECORE(-extra)); + char* new_br = (char*)(CALL_MORECORE(0)); + if (rel_br != CMFAIL && new_br < old_br) + released = old_br - new_br; + } + } + RELEASE_MALLOC_GLOBAL_LOCK(); + } + } + + if (released != 0) { + sp->size -= released; + m->footprint -= released; + init_top(m, m->top, m->topsize - released); + check_top_chunk(m, m->top); + } + } + + /* Unmap any unused mmapped segments */ + if (HAVE_MMAP) + released += release_unused_segments(m); + + /* On failure, disable autotrim to avoid repeated failed future calls */ + if (released == 0 && m->topsize > m->trim_check) + m->trim_check = MAX_SIZE_T; + } + + return (released != 0)? 1 : 0; +} + +/* Consolidate and bin a chunk. Differs from exported versions + of free mainly in that the chunk need not be marked as inuse. 
+*/ +static void dispose_chunk(mstate m, mchunkptr p, size_t psize) { + mchunkptr next = chunk_plus_offset(p, psize); + if (!pinuse(p)) { + mchunkptr prev; + size_t prevsize = p->prev_foot; + if (is_mmapped(p)) { + psize += prevsize + MMAP_FOOT_PAD; + if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) + m->footprint -= psize; + return; + } + prev = chunk_minus_offset(p, prevsize); + psize += prevsize; + p = prev; + if (RTCHECK(ok_address(m, prev))) { /* consolidate backward */ + if (p != m->dv) { + unlink_chunk(m, p, prevsize); + } + else if ((next->head & INUSE_BITS) == INUSE_BITS) { + m->dvsize = psize; + set_free_with_pinuse(p, psize, next); + return; + } + } + else { + CORRUPTION_ERROR_ACTION(m); + return; + } + } + if (RTCHECK(ok_address(m, next))) { + if (!cinuse(next)) { /* consolidate forward */ + if (next == m->top) { + size_t tsize = m->topsize += psize; + m->top = p; + p->head = tsize | PINUSE_BIT; + if (p == m->dv) { + m->dv = 0; + m->dvsize = 0; + } + return; + } + else if (next == m->dv) { + size_t dsize = m->dvsize += psize; + m->dv = p; + set_size_and_pinuse_of_free_chunk(p, dsize); + return; + } + else { + size_t nsize = chunksize(next); + psize += nsize; + unlink_chunk(m, next, nsize); + set_size_and_pinuse_of_free_chunk(p, psize); + if (p == m->dv) { + m->dvsize = psize; + return; + } + } + } + else { + set_free_with_pinuse(p, psize, next); + } + insert_chunk(m, p, psize); + } + else { + CORRUPTION_ERROR_ACTION(m); + } +} + +/* ---------------------------- malloc --------------------------- */ + +/* allocate a large request from the best fitting chunk in a treebin */ +static void* tmalloc_large(mstate m, size_t nb) { + tchunkptr v = 0; + size_t rsize = -nb; /* Unsigned negation */ + tchunkptr t; + bindex_t idx; + compute_tree_index(nb, idx); + if ((t = *treebin_at(m, idx)) != 0) { + /* Traverse tree for this bin looking for node with size == nb */ + size_t sizebits = nb << leftshift_for_tree_index(idx); + tchunkptr rst = 0; /* The deepest untaken 
right subtree */ + for (;;) { + tchunkptr rt; + size_t trem = chunksize(t) - nb; + if (trem < rsize) { + v = t; + if ((rsize = trem) == 0) + break; + } + rt = t->child[1]; + t = t->child[(sizebits >> (SIZE_T_BITSIZE-SIZE_T_ONE)) & 1]; + if (rt != 0 && rt != t) + rst = rt; + if (t == 0) { + t = rst; /* set t to least subtree holding sizes > nb */ + break; + } + sizebits <<= 1; + } + } + if (t == 0 && v == 0) { /* set t to root of next non-empty treebin */ + binmap_t leftbits = left_bits(idx2bit(idx)) & m->treemap; + if (leftbits != 0) { + bindex_t i; + binmap_t leastbit = least_bit(leftbits); + compute_bit2idx(leastbit, i); + t = *treebin_at(m, i); + } + } + + while (t != 0) { /* find smallest of tree or subtree */ + size_t trem = chunksize(t) - nb; + if (trem < rsize) { + rsize = trem; + v = t; + } + t = leftmost_child(t); + } + + /* If dv is a better fit, return 0 so malloc will use it */ + if (v != 0 && rsize < (size_t)(m->dvsize - nb)) { + if (RTCHECK(ok_address(m, v))) { /* split */ + mchunkptr r = chunk_plus_offset(v, nb); + assert(chunksize(v) == rsize + nb); + if (RTCHECK(ok_next(v, r))) { + unlink_large_chunk(m, v); + if (rsize < MIN_CHUNK_SIZE) + set_inuse_and_pinuse(m, v, (rsize + nb)); + else { + set_size_and_pinuse_of_inuse_chunk(m, v, nb); + set_size_and_pinuse_of_free_chunk(r, rsize); + insert_chunk(m, r, rsize); + } + return chunk2mem(v); + } + } + CORRUPTION_ERROR_ACTION(m); + } + return 0; +} + +/* allocate a small request from the best fitting chunk in a treebin */ +static void* tmalloc_small(mstate m, size_t nb) { + tchunkptr t, v; + size_t rsize; + bindex_t i; + binmap_t leastbit = least_bit(m->treemap); + compute_bit2idx(leastbit, i); + v = t = *treebin_at(m, i); + rsize = chunksize(t) - nb; + + while ((t = leftmost_child(t)) != 0) { + size_t trem = chunksize(t) - nb; + if (trem < rsize) { + rsize = trem; + v = t; + } + } + + if (RTCHECK(ok_address(m, v))) { + mchunkptr r = chunk_plus_offset(v, nb); + assert(chunksize(v) == rsize + nb); + if 
(RTCHECK(ok_next(v, r))) { + unlink_large_chunk(m, v); + if (rsize < MIN_CHUNK_SIZE) + set_inuse_and_pinuse(m, v, (rsize + nb)); + else { + set_size_and_pinuse_of_inuse_chunk(m, v, nb); + set_size_and_pinuse_of_free_chunk(r, rsize); + replace_dv(m, r, rsize); + } + return chunk2mem(v); + } + } + + CORRUPTION_ERROR_ACTION(m); + return 0; +} + +#if !ONLY_MSPACES + +void* dlmalloc(size_t bytes) { + /* + Basic algorithm: + If a small request (< 256 bytes minus per-chunk overhead): + 1. If one exists, use a remainderless chunk in associated smallbin. + (Remainderless means that there are too few excess bytes to + represent as a chunk.) + 2. If it is big enough, use the dv chunk, which is normally the + chunk adjacent to the one used for the most recent small request. + 3. If one exists, split the smallest available chunk in a bin, + saving remainder in dv. + 4. If it is big enough, use the top chunk. + 5. If available, get memory from system and use it + Otherwise, for a large request: + 1. Find the smallest available binned chunk that fits, and use it + if it is better fitting than dv chunk, splitting if necessary. + 2. If better fitting than any binned chunk, use the dv chunk. + 3. If it is big enough, use the top chunk. + 4. If request size >= mmap threshold, try to directly mmap this chunk. + 5. If available, get memory from system and use it + + The ugly goto's here ensure that postaction occurs along all paths. + */ + +#if USE_LOCKS + ensure_initialization(); /* initialize in sys_alloc if not using locks */ +#endif + + if (!PREACTION(gm)) { + void* mem; + size_t nb; + if (bytes <= MAX_SMALL_REQUEST) { + bindex_t idx; + binmap_t smallbits; + nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); + idx = small_index(nb); + smallbits = gm->smallmap >> idx; + + if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. 
*/ + mchunkptr b, p; + idx += ~smallbits & 1; /* Uses next bin if idx empty */ + b = smallbin_at(gm, idx); + p = b->fd; + assert(chunksize(p) == small_index2size(idx)); + unlink_first_small_chunk(gm, b, p, idx); + set_inuse_and_pinuse(gm, p, small_index2size(idx)); + mem = chunk2mem(p); + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + + else if (nb > gm->dvsize) { + if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ + mchunkptr b, p, r; + size_t rsize; + bindex_t i; + binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); + binmap_t leastbit = least_bit(leftbits); + compute_bit2idx(leastbit, i); + b = smallbin_at(gm, i); + p = b->fd; + assert(chunksize(p) == small_index2size(i)); + unlink_first_small_chunk(gm, b, p, i); + rsize = small_index2size(i) - nb; + /* Fit here cannot be remainderless if 4byte sizes */ + if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) + set_inuse_and_pinuse(gm, p, small_index2size(i)); + else { + set_size_and_pinuse_of_inuse_chunk(gm, p, nb); + r = chunk_plus_offset(p, nb); + set_size_and_pinuse_of_free_chunk(r, rsize); + replace_dv(gm, r, rsize); + } + mem = chunk2mem(p); + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + + else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) { + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + } + } + else if (bytes >= MAX_REQUEST) + nb = MAX_SIZE_T; /* Too big to allocate. 
Force failure (in sys alloc) */ + else { + nb = pad_request(bytes); + if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) { + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + } + + if (nb <= gm->dvsize) { + size_t rsize = gm->dvsize - nb; + mchunkptr p = gm->dv; + if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ + mchunkptr r = gm->dv = chunk_plus_offset(p, nb); + gm->dvsize = rsize; + set_size_and_pinuse_of_free_chunk(r, rsize); + set_size_and_pinuse_of_inuse_chunk(gm, p, nb); + } + else { /* exhaust dv */ + size_t dvs = gm->dvsize; + gm->dvsize = 0; + gm->dv = 0; + set_inuse_and_pinuse(gm, p, dvs); + } + mem = chunk2mem(p); + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + + else if (nb < gm->topsize) { /* Split top */ + size_t rsize = gm->topsize -= nb; + mchunkptr p = gm->top; + mchunkptr r = gm->top = chunk_plus_offset(p, nb); + r->head = rsize | PINUSE_BIT; + set_size_and_pinuse_of_inuse_chunk(gm, p, nb); + mem = chunk2mem(p); + check_top_chunk(gm, gm->top); + check_malloced_chunk(gm, mem, nb); + goto postaction; + } + + mem = sys_alloc(gm, nb); + + postaction: + POSTACTION(gm); + return mem; + } + + return 0; +} + +/* ---------------------------- free --------------------------- */ + +void dlfree(void* mem) { + /* + Consolidate freed chunks with preceding or succeeding bordering + free chunks, if they exist, and then place in a bin. Intermixed + with special cases for top, dv, mmapped chunks, and usage errors.
+ */ + + if (mem != 0) { + mchunkptr p = mem2chunk(mem); +#if FOOTERS + mstate fm = get_mstate_for(p); + if (!ok_magic(fm)) { + USAGE_ERROR_ACTION(fm, p); + return; + } +#else /* FOOTERS */ +#define fm gm +#endif /* FOOTERS */ + if (!PREACTION(fm)) { + check_inuse_chunk(fm, p); + if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) { + size_t psize = chunksize(p); + mchunkptr next = chunk_plus_offset(p, psize); + if (!pinuse(p)) { + size_t prevsize = p->prev_foot; + if (is_mmapped(p)) { + psize += prevsize + MMAP_FOOT_PAD; + if (CALL_MUNMAP((char*)p - prevsize, psize) == 0) + fm->footprint -= psize; + goto postaction; + } + else { + mchunkptr prev = chunk_minus_offset(p, prevsize); + psize += prevsize; + p = prev; + if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ + if (p != fm->dv) { + unlink_chunk(fm, p, prevsize); + } + else if ((next->head & INUSE_BITS) == INUSE_BITS) { + fm->dvsize = psize; + set_free_with_pinuse(p, psize, next); + goto postaction; + } + } + else + goto erroraction; + } + } + + if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { + if (!cinuse(next)) { /* consolidate forward */ + if (next == fm->top) { + size_t tsize = fm->topsize += psize; + fm->top = p; + p->head = tsize | PINUSE_BIT; + if (p == fm->dv) { + fm->dv = 0; + fm->dvsize = 0; + } + if (should_trim(fm, tsize)) + sys_trim(fm, 0); + goto postaction; + } + else if (next == fm->dv) { + size_t dsize = fm->dvsize += psize; + fm->dv = p; + set_size_and_pinuse_of_free_chunk(p, dsize); + goto postaction; + } + else { + size_t nsize = chunksize(next); + psize += nsize; + unlink_chunk(fm, next, nsize); + set_size_and_pinuse_of_free_chunk(p, psize); + if (p == fm->dv) { + fm->dvsize = psize; + goto postaction; + } + } + } + else + set_free_with_pinuse(p, psize, next); + + if (is_small(psize)) { + insert_small_chunk(fm, p, psize); + check_free_chunk(fm, p); + } + else { + tchunkptr tp = (tchunkptr)p; + insert_large_chunk(fm, tp, psize); + check_free_chunk(fm, p); + if 
(--fm->release_checks == 0) + release_unused_segments(fm); + } + goto postaction; + } + } + erroraction: + USAGE_ERROR_ACTION(fm, p); + postaction: + POSTACTION(fm); + } + } +#if !FOOTERS +#undef fm +#endif /* FOOTERS */ +} + +void* dlcalloc(size_t n_elements, size_t elem_size) { + void* mem; + size_t req = 0; + if (n_elements != 0) { + req = n_elements * elem_size; + if (((n_elements | elem_size) & ~(size_t)0xffff) && + (req / n_elements != elem_size)) + req = MAX_SIZE_T; /* force downstream failure on overflow */ + } + mem = dlmalloc(req); + if (mem != 0 && calloc_must_clear(mem2chunk(mem))) + memset(mem, 0, req); + return mem; +} + +#endif /* !ONLY_MSPACES */ + +/* ------------ Internal support for realloc, memalign, etc -------------- */ + +/* Try to realloc; only in-place unless can_move true */ +static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb, + int can_move) { + mchunkptr newp = 0; + size_t oldsize = chunksize(p); + mchunkptr next = chunk_plus_offset(p, oldsize); + if (RTCHECK(ok_address(m, p) && ok_inuse(p) && + ok_next(p, next) && ok_pinuse(next))) { + if (is_mmapped(p)) { + newp = mmap_resize(m, p, nb, can_move); + } + else if (oldsize >= nb) { /* already big enough */ + size_t rsize = oldsize - nb; + if (rsize >= MIN_CHUNK_SIZE) { /* split off remainder */ + mchunkptr r = chunk_plus_offset(p, nb); + set_inuse(m, p, nb); + set_inuse(m, r, rsize); + dispose_chunk(m, r, rsize); + } + newp = p; + } + else if (next == m->top) { /* extend into top */ + if (oldsize + m->topsize > nb) { + size_t newsize = oldsize + m->topsize; + size_t newtopsize = newsize - nb; + mchunkptr newtop = chunk_plus_offset(p, nb); + set_inuse(m, p, nb); + newtop->head = newtopsize |PINUSE_BIT; + m->top = newtop; + m->topsize = newtopsize; + newp = p; + } + } + else if (next == m->dv) { /* extend into dv */ + size_t dvs = m->dvsize; + if (oldsize + dvs >= nb) { + size_t dsize = oldsize + dvs - nb; + if (dsize >= MIN_CHUNK_SIZE) { + mchunkptr r = chunk_plus_offset(p, 
nb); + mchunkptr n = chunk_plus_offset(r, dsize); + set_inuse(m, p, nb); + set_size_and_pinuse_of_free_chunk(r, dsize); + clear_pinuse(n); + m->dvsize = dsize; + m->dv = r; + } + else { /* exhaust dv */ + size_t newsize = oldsize + dvs; + set_inuse(m, p, newsize); + m->dvsize = 0; + m->dv = 0; + } + newp = p; + } + } + else if (!cinuse(next)) { /* extend into next free chunk */ + size_t nextsize = chunksize(next); + if (oldsize + nextsize >= nb) { + size_t rsize = oldsize + nextsize - nb; + unlink_chunk(m, next, nextsize); + if (rsize < MIN_CHUNK_SIZE) { + size_t newsize = oldsize + nextsize; + set_inuse(m, p, newsize); + } + else { + mchunkptr r = chunk_plus_offset(p, nb); + set_inuse(m, p, nb); + set_inuse(m, r, rsize); + dispose_chunk(m, r, rsize); + } + newp = p; + } + } + } + else { + USAGE_ERROR_ACTION(m, chunk2mem(p)); + } + return newp; +} + +static void* internal_memalign(mstate m, size_t alignment, size_t bytes) { + void* mem = 0; + if (alignment < MIN_CHUNK_SIZE) /* must be at least a minimum chunk size */ + alignment = MIN_CHUNK_SIZE; + if ((alignment & (alignment-SIZE_T_ONE)) != 0) {/* Ensure a power of 2 */ + size_t a = MALLOC_ALIGNMENT << 1; + while (a < alignment) a <<= 1; + alignment = a; + } + if (bytes >= MAX_REQUEST - alignment) { + if (m != 0) { /* Test isn't needed but avoids compiler warning */ + MALLOC_FAILURE_ACTION; + } + } + else { + size_t nb = request2size(bytes); + size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD; + mem = internal_malloc(m, req); + if (mem != 0) { + mchunkptr p = mem2chunk(mem); + if (PREACTION(m)) + return 0; + if ((((size_t)(mem)) & (alignment - 1)) != 0) { /* misaligned */ + /* + Find an aligned spot inside chunk. Since we need to give + back leading space in a chunk of at least MIN_CHUNK_SIZE, if + the first calculation places us at a spot with less than + MIN_CHUNK_SIZE leader, we can move to the next aligned spot. + We've allocated enough total room so that this is always + possible. 
+ */ + char* br = (char*)mem2chunk((size_t)(((size_t)((char*)mem + alignment - + SIZE_T_ONE)) & + -alignment)); + char* pos = ((size_t)(br - (char*)(p)) >= MIN_CHUNK_SIZE)? + br : br+alignment; + mchunkptr newp = (mchunkptr)pos; + size_t leadsize = pos - (char*)(p); + size_t newsize = chunksize(p) - leadsize; + + if (is_mmapped(p)) { /* For mmapped chunks, just adjust offset */ + newp->prev_foot = p->prev_foot + leadsize; + newp->head = newsize; + } + else { /* Otherwise, give back leader, use the rest */ + set_inuse(m, newp, newsize); + set_inuse(m, p, leadsize); + dispose_chunk(m, p, leadsize); + } + p = newp; + } + + /* Give back spare room at the end */ + if (!is_mmapped(p)) { + size_t size = chunksize(p); + if (size > nb + MIN_CHUNK_SIZE) { + size_t remainder_size = size - nb; + mchunkptr remainder = chunk_plus_offset(p, nb); + set_inuse(m, p, nb); + set_inuse(m, remainder, remainder_size); + dispose_chunk(m, remainder, remainder_size); + } + } + + mem = chunk2mem(p); + assert (chunksize(p) >= nb); + assert(((size_t)mem & (alignment - 1)) == 0); + check_inuse_chunk(m, p); + POSTACTION(m); + } + } + return mem; +} + +/* + Common support for independent_X routines, handling + all of the combinations that can result. 
+ The opts arg has: + bit 0 set if all elements are same size (using sizes[0]) + bit 1 set if elements should be zeroed +*/ +static void** ialloc(mstate m, + size_t n_elements, + size_t* sizes, + int opts, + void* chunks[]) { + + size_t element_size; /* chunksize of each element, if all same */ + size_t contents_size; /* total size of elements */ + size_t array_size; /* request size of pointer array */ + void* mem; /* malloced aggregate space */ + mchunkptr p; /* corresponding chunk */ + size_t remainder_size; /* remaining bytes while splitting */ + void** marray; /* either "chunks" or malloced ptr array */ + mchunkptr array_chunk; /* chunk for malloced ptr array */ + flag_t was_enabled; /* to disable mmap */ + size_t size; + size_t i; + + ensure_initialization(); + /* compute array length, if needed */ + if (chunks != 0) { + if (n_elements == 0) + return chunks; /* nothing to do */ + marray = chunks; + array_size = 0; + } + else { + /* if empty req, must still return chunk representing empty array */ + if (n_elements == 0) + return (void**)internal_malloc(m, 0); + marray = 0; + array_size = request2size(n_elements * (sizeof(void*))); + } + + /* compute total element size */ + if (opts & 0x1) { /* all-same-size */ + element_size = request2size(*sizes); + contents_size = n_elements * element_size; + } + else { /* add up all the sizes */ + element_size = 0; + contents_size = 0; + for (i = 0; i != n_elements; ++i) + contents_size += request2size(sizes[i]); + } + + size = contents_size + array_size; + + /* + Allocate the aggregate chunk. First disable direct-mmapping so + malloc won't use it, since we would not be able to later + free/realloc space internal to a segregated mmap region. 
+ */ + was_enabled = use_mmap(m); + disable_mmap(m); + mem = internal_malloc(m, size - CHUNK_OVERHEAD); + if (was_enabled) + enable_mmap(m); + if (mem == 0) + return 0; + + if (PREACTION(m)) return 0; + p = mem2chunk(mem); + remainder_size = chunksize(p); + + assert(!is_mmapped(p)); + + if (opts & 0x2) { /* optionally clear the elements */ + memset((size_t*)mem, 0, remainder_size - SIZE_T_SIZE - array_size); + } + + /* If not provided, allocate the pointer array as final part of chunk */ + if (marray == 0) { + size_t array_chunk_size; + array_chunk = chunk_plus_offset(p, contents_size); + array_chunk_size = remainder_size - contents_size; + marray = (void**) (chunk2mem(array_chunk)); + set_size_and_pinuse_of_inuse_chunk(m, array_chunk, array_chunk_size); + remainder_size = contents_size; + } + + /* split out elements */ + for (i = 0; ; ++i) { + marray[i] = chunk2mem(p); + if (i != n_elements-1) { + if (element_size != 0) + size = element_size; + else + size = request2size(sizes[i]); + remainder_size -= size; + set_size_and_pinuse_of_inuse_chunk(m, p, size); + p = chunk_plus_offset(p, size); + } + else { /* the final element absorbs any overallocation slop */ + set_size_and_pinuse_of_inuse_chunk(m, p, remainder_size); + break; + } + } + +#if DEBUG + if (marray != chunks) { + /* final element must have exactly exhausted chunk */ + if (element_size != 0) { + assert(remainder_size == element_size); + } + else { + assert(remainder_size == request2size(sizes[i])); + } + check_inuse_chunk(m, mem2chunk(marray)); + } + for (i = 0; i != n_elements; ++i) + check_inuse_chunk(m, mem2chunk(marray[i])); + +#endif /* DEBUG */ + + POSTACTION(m); + return marray; +} + +/* Try to free all pointers in the given array. 
+ Note: this could be made faster, by delaying consolidation, + at the price of disabling some user integrity checks. We + still optimize some consolidations by combining adjacent + chunks before freeing, which will occur often if allocated + with ialloc or the array is sorted. +*/ +static size_t internal_bulk_free(mstate m, void* array[], size_t nelem) { + size_t unfreed = 0; + if (!PREACTION(m)) { + void** a; + void** fence = &(array[nelem]); + for (a = array; a != fence; ++a) { + void* mem = *a; + if (mem != 0) { + mchunkptr p = mem2chunk(mem); + size_t psize = chunksize(p); +#if FOOTERS + if (get_mstate_for(p) != m) { + ++unfreed; + continue; + } +#endif + check_inuse_chunk(m, p); + *a = 0; + if (RTCHECK(ok_address(m, p) && ok_inuse(p))) { + void ** b = a + 1; /* try to merge with next chunk */ + mchunkptr next = next_chunk(p); + if (b != fence && *b == chunk2mem(next)) { + size_t newsize = chunksize(next) + psize; + set_inuse(m, p, newsize); + *b = chunk2mem(p); + } + else + dispose_chunk(m, p, psize); + } + else { + CORRUPTION_ERROR_ACTION(m); + break; + } + } + } + if (should_trim(m, m->topsize)) + sys_trim(m, 0); + POSTACTION(m); + } + return unfreed; +} + +/* Traversal */ +#if MALLOC_INSPECT_ALL +static void internal_inspect_all(mstate m, + void(*handler)(void *start, + void *end, + size_t used_bytes, + void* callback_arg), + void* arg) { + if (is_initialized(m)) { + mchunkptr top = m->top; + msegmentptr s; + for (s = &m->seg; s != 0; s = s->next) { + mchunkptr q = align_as_chunk(s->base); + while (segment_holds(s, q) && q->head != FENCEPOST_HEAD) { + mchunkptr next = next_chunk(q); + size_t sz = chunksize(q); + size_t used; + void* start; + if (is_inuse(q)) { + used = sz - CHUNK_OVERHEAD; /* must not be mmapped */ + start = chunk2mem(q); + } + else { + used = 0; + if (is_small(sz)) { /* offset by possible bookkeeping */ + start = (void*)((char*)q + sizeof(struct malloc_chunk)); + } + else { + start = (void*)((char*)q + sizeof(struct malloc_tree_chunk)); + 
} + } + if (start < (void*)next) /* skip if all space is bookkeeping */ + handler(start, next, used, arg); + if (q == top) + break; + q = next; + } + } + } +} +#endif /* MALLOC_INSPECT_ALL */ + +/* ------------------ Exported realloc, memalign, etc -------------------- */ + +#if !ONLY_MSPACES + +void* dlrealloc(void* oldmem, size_t bytes) { + void* mem = 0; + if (oldmem == 0) { + mem = dlmalloc(bytes); + } + else if (bytes >= MAX_REQUEST) { + MALLOC_FAILURE_ACTION; + } +#ifdef REALLOC_ZERO_BYTES_FREES + else if (bytes == 0) { + dlfree(oldmem); + } +#endif /* REALLOC_ZERO_BYTES_FREES */ + else { + size_t nb = request2size(bytes); + mchunkptr oldp = mem2chunk(oldmem); +#if ! FOOTERS + mstate m = gm; +#else /* FOOTERS */ + mstate m = get_mstate_for(oldp); + if (!ok_magic(m)) { + USAGE_ERROR_ACTION(m, oldmem); + return 0; + } +#endif /* FOOTERS */ + if (!PREACTION(m)) { + mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1); + POSTACTION(m); + if (newp != 0) { + check_inuse_chunk(m, newp); + mem = chunk2mem(newp); + } + else { + mem = internal_malloc(m, bytes); + if (mem != 0) { + size_t oc = chunksize(oldp) - overhead_for(oldp); + memcpy(mem, oldmem, (oc < bytes)? oc : bytes); + internal_free(m, oldmem); + } + } + } + } + return mem; +} + +void* dlrealloc_in_place(void* oldmem, size_t bytes) { + void* mem = 0; + if (oldmem != 0) { + if (bytes >= MAX_REQUEST) { + MALLOC_FAILURE_ACTION; + } + else { + size_t nb = request2size(bytes); + mchunkptr oldp = mem2chunk(oldmem); +#if ! 
FOOTERS + mstate m = gm; +#else /* FOOTERS */ + mstate m = get_mstate_for(oldp); + if (!ok_magic(m)) { + USAGE_ERROR_ACTION(m, oldmem); + return 0; + } +#endif /* FOOTERS */ + if (!PREACTION(m)) { + mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0); + POSTACTION(m); + if (newp == oldp) { + check_inuse_chunk(m, newp); + mem = oldmem; + } + } + } + } + return mem; +} + +void* dlmemalign(size_t alignment, size_t bytes) { + if (alignment <= MALLOC_ALIGNMENT) { + return dlmalloc(bytes); + } + return internal_memalign(gm, alignment, bytes); +} + +int dlposix_memalign(void** pp, size_t alignment, size_t bytes) { + void* mem = 0; + if (alignment == MALLOC_ALIGNMENT) + mem = dlmalloc(bytes); + else { + size_t d = alignment / sizeof(void*); + size_t r = alignment % sizeof(void*); + if (r != 0 || d == 0 || (d & (d-SIZE_T_ONE)) != 0) + return EINVAL; + else if (bytes <= MAX_REQUEST - alignment) { + if (alignment < MIN_CHUNK_SIZE) + alignment = MIN_CHUNK_SIZE; + mem = internal_memalign(gm, alignment, bytes); + } + } + if (mem == 0) + return ENOMEM; + else { + *pp = mem; + return 0; + } +} + +void* dlvalloc(size_t bytes) { + size_t pagesz; + ensure_initialization(); + pagesz = mparams.page_size; + return dlmemalign(pagesz, bytes); +} + +void* dlpvalloc(size_t bytes) { + size_t pagesz; + ensure_initialization(); + pagesz = mparams.page_size; + return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE)); +} + +void** dlindependent_calloc(size_t n_elements, size_t elem_size, + void* chunks[]) { + size_t sz = elem_size; /* serves as 1-element array */ + return ialloc(gm, n_elements, &sz, 3, chunks); +} + +void** dlindependent_comalloc(size_t n_elements, size_t sizes[], + void* chunks[]) { + return ialloc(gm, n_elements, sizes, 0, chunks); +} + +size_t dlbulk_free(void* array[], size_t nelem) { + return internal_bulk_free(gm, array, nelem); +} + +#if MALLOC_INSPECT_ALL +void dlmalloc_inspect_all(void(*handler)(void *start, + void *end, + size_t used_bytes, + 
void* callback_arg), + void* arg) { + ensure_initialization(); + if (!PREACTION(gm)) { + internal_inspect_all(gm, handler, arg); + POSTACTION(gm); + } +} +#endif /* MALLOC_INSPECT_ALL */ + +int dlmalloc_trim(size_t pad) { + int result = 0; + ensure_initialization(); + if (!PREACTION(gm)) { + result = sys_trim(gm, pad); + POSTACTION(gm); + } + return result; +} + +size_t dlmalloc_footprint(void) { + return gm->footprint; +} + +size_t dlmalloc_max_footprint(void) { + return gm->max_footprint; +} + +size_t dlmalloc_footprint_limit(void) { + size_t maf = gm->footprint_limit; + return maf == 0 ? MAX_SIZE_T : maf; +} + +size_t dlmalloc_set_footprint_limit(size_t bytes) { + size_t result; /* invert sense of 0 */ + if (bytes == 0) + result = granularity_align(1); /* Use minimal size */ + else if (bytes == MAX_SIZE_T) + result = 0; /* disable */ + else + result = granularity_align(bytes); + return gm->footprint_limit = result; +} + +#if !NO_MALLINFO +struct mallinfo dlmallinfo(void) { + return internal_mallinfo(gm); +} +#endif /* NO_MALLINFO */ + +#if !NO_MALLOC_STATS +void dlmalloc_stats() { + internal_malloc_stats(gm); +} +#endif /* NO_MALLOC_STATS */ + +int dlmallopt(int param_number, int value) { + return change_mparam(param_number, value); +} + +size_t dlmalloc_usable_size(void* mem) { + if (mem != 0) { + mchunkptr p = mem2chunk(mem); + if (is_inuse(p)) + return chunksize(p) - overhead_for(p); + } + return 0; +} + +#endif /* !ONLY_MSPACES */ + +/* ----------------------------- user mspaces ---------------------------- */ + +#if MSPACES + +static mstate init_user_mstate(char* tbase, size_t tsize) { + size_t msize = pad_request(sizeof(struct malloc_state)); + mchunkptr mn; + mchunkptr msp = align_as_chunk(tbase); + mstate m = (mstate)(chunk2mem(msp)); + memset(m, 0, msize); + (void)INITIAL_LOCK(&m->mutex); + msp->head = (msize|INUSE_BITS); + m->seg.base = m->least_addr = tbase; + m->seg.size = m->footprint = m->max_footprint = tsize; + m->magic = mparams.magic; + 
m->release_checks = MAX_RELEASE_CHECK_RATE; + m->mflags = mparams.default_mflags; + m->extp = 0; + m->exts = 0; + disable_contiguous(m); + init_bins(m); + mn = next_chunk(mem2chunk(m)); + init_top(m, mn, (size_t)((tbase + tsize) - (char*)mn) - TOP_FOOT_SIZE); + check_top_chunk(m, m->top); + return m; +} + +mspace create_mspace(size_t capacity, int locked) { + mstate m = 0; + size_t msize; + ensure_initialization(); + msize = pad_request(sizeof(struct malloc_state)); + if (capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { + size_t rs = ((capacity == 0)? mparams.granularity : + (capacity + TOP_FOOT_SIZE + msize)); + size_t tsize = granularity_align(rs); + char* tbase = (char*)(CALL_MMAP(tsize)); + if (tbase != CMFAIL) { + m = init_user_mstate(tbase, tsize); + m->seg.sflags = USE_MMAP_BIT; + set_lock(m, locked); + } + } + return (mspace)m; +} + +mspace create_mspace_with_base(void* base, size_t capacity, int locked) { + mstate m = 0; + size_t msize; + ensure_initialization(); + msize = pad_request(sizeof(struct malloc_state)); + if (capacity > msize + TOP_FOOT_SIZE && + capacity < (size_t) -(msize + TOP_FOOT_SIZE + mparams.page_size)) { + m = init_user_mstate((char*)base, capacity); + m->seg.sflags = EXTERN_BIT; + set_lock(m, locked); + } + return (mspace)m; +} + +int mspace_track_large_chunks(mspace msp, int enable) { + int ret = 0; + mstate ms = (mstate)msp; + if (!PREACTION(ms)) { + if (!use_mmap(ms)) { + ret = 1; + } + if (!enable) { + enable_mmap(ms); + } else { + disable_mmap(ms); + } + POSTACTION(ms); + } + return ret; +} + +size_t destroy_mspace(mspace msp) { + size_t freed = 0; + mstate ms = (mstate)msp; + if (ok_magic(ms)) { + msegmentptr sp = &ms->seg; + (void)DESTROY_LOCK(&ms->mutex); /* destroy before unmapped */ + while (sp != 0) { + char* base = sp->base; + size_t size = sp->size; + flag_t flag = sp->sflags; + (void)base; /* placate people compiling -Wunused-variable */ + sp = sp->next; + if ((flag & USE_MMAP_BIT) && !(flag & 
EXTERN_BIT) && + CALL_MUNMAP(base, size) == 0) + freed += size; + } + } + else { + USAGE_ERROR_ACTION(ms,ms); + } + return freed; +} + +/* + mspace versions of routines are near-clones of the global + versions. This is not so nice but better than the alternatives. +*/ + +void* mspace_malloc(mspace msp, size_t bytes) { + mstate ms = (mstate)msp; + if (!ok_magic(ms)) { + USAGE_ERROR_ACTION(ms,ms); + return 0; + } + if (!PREACTION(ms)) { + void* mem; + size_t nb; + if (bytes <= MAX_SMALL_REQUEST) { + bindex_t idx; + binmap_t smallbits; + nb = (bytes < MIN_REQUEST)? MIN_CHUNK_SIZE : pad_request(bytes); + idx = small_index(nb); + smallbits = ms->smallmap >> idx; + + if ((smallbits & 0x3U) != 0) { /* Remainderless fit to a smallbin. */ + mchunkptr b, p; + idx += ~smallbits & 1; /* Uses next bin if idx empty */ + b = smallbin_at(ms, idx); + p = b->fd; + assert(chunksize(p) == small_index2size(idx)); + unlink_first_small_chunk(ms, b, p, idx); + set_inuse_and_pinuse(ms, p, small_index2size(idx)); + mem = chunk2mem(p); + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + + else if (nb > ms->dvsize) { + if (smallbits != 0) { /* Use chunk in next nonempty smallbin */ + mchunkptr b, p, r; + size_t rsize; + bindex_t i; + binmap_t leftbits = (smallbits << idx) & left_bits(idx2bit(idx)); + binmap_t leastbit = least_bit(leftbits); + compute_bit2idx(leastbit, i); + b = smallbin_at(ms, i); + p = b->fd; + assert(chunksize(p) == small_index2size(i)); + unlink_first_small_chunk(ms, b, p, i); + rsize = small_index2size(i) - nb; + /* Fit here cannot be remainderless if 4byte sizes */ + if (SIZE_T_SIZE != 4 && rsize < MIN_CHUNK_SIZE) + set_inuse_and_pinuse(ms, p, small_index2size(i)); + else { + set_size_and_pinuse_of_inuse_chunk(ms, p, nb); + r = chunk_plus_offset(p, nb); + set_size_and_pinuse_of_free_chunk(r, rsize); + replace_dv(ms, r, rsize); + } + mem = chunk2mem(p); + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + + else if (ms->treemap != 0 && (mem = 
tmalloc_small(ms, nb)) != 0) { + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + } + } + else if (bytes >= MAX_REQUEST) + nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */ + else { + nb = pad_request(bytes); + if (ms->treemap != 0 && (mem = tmalloc_large(ms, nb)) != 0) { + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + } + + if (nb <= ms->dvsize) { + size_t rsize = ms->dvsize - nb; + mchunkptr p = ms->dv; + if (rsize >= MIN_CHUNK_SIZE) { /* split dv */ + mchunkptr r = ms->dv = chunk_plus_offset(p, nb); + ms->dvsize = rsize; + set_size_and_pinuse_of_free_chunk(r, rsize); + set_size_and_pinuse_of_inuse_chunk(ms, p, nb); + } + else { /* exhaust dv */ + size_t dvs = ms->dvsize; + ms->dvsize = 0; + ms->dv = 0; + set_inuse_and_pinuse(ms, p, dvs); + } + mem = chunk2mem(p); + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + + else if (nb < ms->topsize) { /* Split top */ + size_t rsize = ms->topsize -= nb; + mchunkptr p = ms->top; + mchunkptr r = ms->top = chunk_plus_offset(p, nb); + r->head = rsize | PINUSE_BIT; + set_size_and_pinuse_of_inuse_chunk(ms, p, nb); + mem = chunk2mem(p); + check_top_chunk(ms, ms->top); + check_malloced_chunk(ms, mem, nb); + goto postaction; + } + + mem = sys_alloc(ms, nb); + + postaction: + POSTACTION(ms); + return mem; + } + + return 0; +} + +void mspace_free(mspace msp, void* mem) { + if (mem != 0) { + mchunkptr p = mem2chunk(mem); +#if FOOTERS + mstate fm = get_mstate_for(p); + (void)msp; /* placate people compiling -Wunused */ +#else /* FOOTERS */ + mstate fm = (mstate)msp; +#endif /* FOOTERS */ + if (!ok_magic(fm)) { + USAGE_ERROR_ACTION(fm, p); + return; + } + if (!PREACTION(fm)) { + check_inuse_chunk(fm, p); + if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) { + size_t psize = chunksize(p); + mchunkptr next = chunk_plus_offset(p, psize); + if (!pinuse(p)) { + size_t prevsize = p->prev_foot; + if (is_mmapped(p)) { + psize += prevsize + MMAP_FOOT_PAD; + if (CALL_MUNMAP((char*)p - 
prevsize, psize) == 0) + fm->footprint -= psize; + goto postaction; + } + else { + mchunkptr prev = chunk_minus_offset(p, prevsize); + psize += prevsize; + p = prev; + if (RTCHECK(ok_address(fm, prev))) { /* consolidate backward */ + if (p != fm->dv) { + unlink_chunk(fm, p, prevsize); + } + else if ((next->head & INUSE_BITS) == INUSE_BITS) { + fm->dvsize = psize; + set_free_with_pinuse(p, psize, next); + goto postaction; + } + } + else + goto erroraction; + } + } + + if (RTCHECK(ok_next(p, next) && ok_pinuse(next))) { + if (!cinuse(next)) { /* consolidate forward */ + if (next == fm->top) { + size_t tsize = fm->topsize += psize; + fm->top = p; + p->head = tsize | PINUSE_BIT; + if (p == fm->dv) { + fm->dv = 0; + fm->dvsize = 0; + } + if (should_trim(fm, tsize)) + sys_trim(fm, 0); + goto postaction; + } + else if (next == fm->dv) { + size_t dsize = fm->dvsize += psize; + fm->dv = p; + set_size_and_pinuse_of_free_chunk(p, dsize); + goto postaction; + } + else { + size_t nsize = chunksize(next); + psize += nsize; + unlink_chunk(fm, next, nsize); + set_size_and_pinuse_of_free_chunk(p, psize); + if (p == fm->dv) { + fm->dvsize = psize; + goto postaction; + } + } + } + else + set_free_with_pinuse(p, psize, next); + + if (is_small(psize)) { + insert_small_chunk(fm, p, psize); + check_free_chunk(fm, p); + } + else { + tchunkptr tp = (tchunkptr)p; + insert_large_chunk(fm, tp, psize); + check_free_chunk(fm, p); + if (--fm->release_checks == 0) + release_unused_segments(fm); + } + goto postaction; + } + } + erroraction: + USAGE_ERROR_ACTION(fm, p); + postaction: + POSTACTION(fm); + } + } +} + +void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size) { + void* mem; + size_t req = 0; + mstate ms = (mstate)msp; + if (!ok_magic(ms)) { + USAGE_ERROR_ACTION(ms,ms); + return 0; + } + if (n_elements != 0) { + req = n_elements * elem_size; + if (((n_elements | elem_size) & ~(size_t)0xffff) && + (req / n_elements != elem_size)) + req = MAX_SIZE_T; /* force downstream failure 
on overflow */ + } + mem = internal_malloc(ms, req); + if (mem != 0 && calloc_must_clear(mem2chunk(mem))) + memset(mem, 0, req); + return mem; +} + +void* mspace_realloc(mspace msp, void* oldmem, size_t bytes) { + void* mem = 0; + if (oldmem == 0) { + mem = mspace_malloc(msp, bytes); + } + else if (bytes >= MAX_REQUEST) { + MALLOC_FAILURE_ACTION; + } +#ifdef REALLOC_ZERO_BYTES_FREES + else if (bytes == 0) { + mspace_free(msp, oldmem); + } +#endif /* REALLOC_ZERO_BYTES_FREES */ + else { + size_t nb = request2size(bytes); + mchunkptr oldp = mem2chunk(oldmem); +#if ! FOOTERS + mstate m = (mstate)msp; +#else /* FOOTERS */ + mstate m = get_mstate_for(oldp); + if (!ok_magic(m)) { + USAGE_ERROR_ACTION(m, oldmem); + return 0; + } +#endif /* FOOTERS */ + if (!PREACTION(m)) { + mchunkptr newp = try_realloc_chunk(m, oldp, nb, 1); + POSTACTION(m); + if (newp != 0) { + check_inuse_chunk(m, newp); + mem = chunk2mem(newp); + } + else { + mem = mspace_malloc(m, bytes); + if (mem != 0) { + size_t oc = chunksize(oldp) - overhead_for(oldp); + memcpy(mem, oldmem, (oc < bytes)? oc : bytes); + mspace_free(m, oldmem); + } + } + } + } + return mem; +} + +void* mspace_realloc_in_place(mspace msp, void* oldmem, size_t bytes) { + void* mem = 0; + if (oldmem != 0) { + if (bytes >= MAX_REQUEST) { + MALLOC_FAILURE_ACTION; + } + else { + size_t nb = request2size(bytes); + mchunkptr oldp = mem2chunk(oldmem); +#if ! 
FOOTERS + mstate m = (mstate)msp; +#else /* FOOTERS */ + mstate m = get_mstate_for(oldp); + (void)msp; /* placate people compiling -Wunused */ + if (!ok_magic(m)) { + USAGE_ERROR_ACTION(m, oldmem); + return 0; + } +#endif /* FOOTERS */ + if (!PREACTION(m)) { + mchunkptr newp = try_realloc_chunk(m, oldp, nb, 0); + POSTACTION(m); + if (newp == oldp) { + check_inuse_chunk(m, newp); + mem = oldmem; + } + } + } + } + return mem; +} + +void* mspace_memalign(mspace msp, size_t alignment, size_t bytes) { + mstate ms = (mstate)msp; + if (!ok_magic(ms)) { + USAGE_ERROR_ACTION(ms,ms); + return 0; + } + if (alignment <= MALLOC_ALIGNMENT) + return mspace_malloc(msp, bytes); + return internal_memalign(ms, alignment, bytes); +} + +void** mspace_independent_calloc(mspace msp, size_t n_elements, + size_t elem_size, void* chunks[]) { + size_t sz = elem_size; /* serves as 1-element array */ + mstate ms = (mstate)msp; + if (!ok_magic(ms)) { + USAGE_ERROR_ACTION(ms,ms); + return 0; + } + return ialloc(ms, n_elements, &sz, 3, chunks); +} + +void** mspace_independent_comalloc(mspace msp, size_t n_elements, + size_t sizes[], void* chunks[]) { + mstate ms = (mstate)msp; + if (!ok_magic(ms)) { + USAGE_ERROR_ACTION(ms,ms); + return 0; + } + return ialloc(ms, n_elements, sizes, 0, chunks); +} + +size_t mspace_bulk_free(mspace msp, void* array[], size_t nelem) { + return internal_bulk_free((mstate)msp, array, nelem); +} + +#if MALLOC_INSPECT_ALL +void mspace_inspect_all(mspace msp, + void(*handler)(void *start, + void *end, + size_t used_bytes, + void* callback_arg), + void* arg) { + mstate ms = (mstate)msp; + if (ok_magic(ms)) { + if (!PREACTION(ms)) { + internal_inspect_all(ms, handler, arg); + POSTACTION(ms); + } + } + else { + USAGE_ERROR_ACTION(ms,ms); + } +} +#endif /* MALLOC_INSPECT_ALL */ + +int mspace_trim(mspace msp, size_t pad) { + int result = 0; + mstate ms = (mstate)msp; + if (ok_magic(ms)) { + if (!PREACTION(ms)) { + result = sys_trim(ms, pad); + POSTACTION(ms); + } + } + else { 
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+#if !NO_MALLOC_STATS
+void mspace_malloc_stats(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    internal_malloc_stats(ms);
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+}
+#endif /* NO_MALLOC_STATS */
+
+size_t mspace_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->footprint;
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+size_t mspace_max_footprint(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    result = ms->max_footprint;
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+size_t mspace_footprint_limit(mspace msp) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    size_t maf = ms->footprint_limit;
+    result = (maf == 0) ? MAX_SIZE_T : maf;
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+size_t mspace_set_footprint_limit(mspace msp, size_t bytes) {
+  size_t result = 0;
+  mstate ms = (mstate)msp;
+  if (ok_magic(ms)) {
+    if (bytes == 0)
+      result = granularity_align(1); /* Use minimal size */
+    else if (bytes == MAX_SIZE_T)
+      result = 0;                    /* disable */
+    else
+      result = granularity_align(bytes);
+    ms->footprint_limit = result;
+  }
+  else {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return result;
+}
+
+#if !NO_MALLINFO
+struct mallinfo mspace_mallinfo(mspace msp) {
+  mstate ms = (mstate)msp;
+  if (!ok_magic(ms)) {
+    USAGE_ERROR_ACTION(ms,ms);
+  }
+  return internal_mallinfo(ms);
+}
+#endif /* NO_MALLINFO */
+
+size_t mspace_usable_size(const void* mem) {
+  if (mem != 0) {
+    mchunkptr p = mem2chunk(mem);
+    if (is_inuse(p))
+      return chunksize(p) - overhead_for(p);
+  }
+  return 0;
+}
+
+int mspace_mallopt(int param_number, int value) {
+  return change_mparam(param_number, value);
+}
+
+#endif /* MSPACES */
+
+
+/* -------------------- Alternative MORECORE functions ------------------- */
+
+/*
+  Guidelines for creating a custom version of MORECORE:
+
+  * For best performance, MORECORE should allocate in multiples of pagesize.
+  * MORECORE may allocate more memory than requested. (Or even less,
+    but this will usually result in a malloc failure.)
+  * MORECORE must not allocate memory when given argument zero, but
+    instead return one past the end address of memory from previous
+    nonzero call.
+  * For best performance, consecutive calls to MORECORE with positive
+    arguments should return increasing addresses, indicating that
+    space has been contiguously extended.
+  * Even though consecutive calls to MORECORE need not return contiguous
+    addresses, it must be OK for malloc'ed chunks to span multiple
+    regions in those cases where they do happen to be contiguous.
+  * MORECORE need not handle negative arguments -- it may instead
+    just return MFAIL when given negative arguments.
+    Negative arguments are always multiples of pagesize. MORECORE
+    must not misinterpret negative args as large positive unsigned
+    args. You can suppress all such calls from even occurring by defining
+    MORECORE_CANNOT_TRIM.
+
+  As an example alternative MORECORE, here is a custom allocator
+  kindly contributed for pre-OSX macOS. It uses virtually but not
+  necessarily physically contiguous non-paged memory (locked in,
+  present and won't get swapped out). You can use it by uncommenting
+  this section, adding some #includes, and setting up the appropriate
+  defines above:
+
+      #define MORECORE osMoreCore
+
+  There is also a shutdown routine that should somehow be called for
+  cleanup upon program exit.
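Before the contributed macOS example below, the contract above can be sketched in miniature with a MORECORE that grows through a fixed static arena. This is not part of the patch; the 64 KiB arena size and the OS_MFAIL stand-in for dlmalloc's MFAIL sentinel are assumptions for illustration only.

```c
#include <stddef.h>

/* Sketch only: OS_MFAIL stands in for dlmalloc's MFAIL sentinel, and the
   64 KiB arena size is arbitrary. */
#define OS_MFAIL   ((void *)(~(size_t)0))
#define ARENA_SIZE (64 * 1024)

static char arena[ARENA_SIZE];
static size_t arena_used;

void *osMoreCoreStatic(int size)
{
    if (size > 0) {
        /* Grow: hand out the next slice of the arena, or fail. */
        if ((size_t)size > ARENA_SIZE - arena_used)
            return OS_MFAIL;
        void *p = arena + arena_used;
        arena_used += (size_t)size;
        return p;
    } else if (size < 0) {
        /* Shrink requests may simply be refused. */
        return OS_MFAIL;
    }
    /* size == 0: one past the end of the previously returned memory. */
    return arena + arena_used;
}
```

With `#define MORECORE osMoreCoreStatic` (and MORECORE_CANNOT_TRIM defined, since shrinking is refused), malloc would carve its chunks out of the static arena. Note that consecutive positive calls return increasing, contiguous addresses, satisfying the performance guideline above.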
+ + #define MAX_POOL_ENTRIES 100 + #define MINIMUM_MORECORE_SIZE (64 * 1024U) + static int next_os_pool; + void *our_os_pools[MAX_POOL_ENTRIES]; + + void *osMoreCore(int size) + { + void *ptr = 0; + static void *sbrk_top = 0; + + if (size > 0) + { + if (size < MINIMUM_MORECORE_SIZE) + size = MINIMUM_MORECORE_SIZE; + if (CurrentExecutionLevel() == kTaskLevel) + ptr = PoolAllocateResident(size + RM_PAGE_SIZE, 0); + if (ptr == 0) + { + return (void *) MFAIL; + } + // save ptrs so they can be freed during cleanup + our_os_pools[next_os_pool] = ptr; + next_os_pool++; + ptr = (void *) ((((size_t) ptr) + RM_PAGE_MASK) & ~RM_PAGE_MASK); + sbrk_top = (char *) ptr + size; + return ptr; + } + else if (size < 0) + { + // we don't currently support shrink behavior + return (void *) MFAIL; + } + else + { + return sbrk_top; + } + } + + // cleanup any allocated memory pools + // called as last thing before shutting down driver + + void osCleanupMem(void) + { + void **ptr; + + for (ptr = our_os_pools; ptr < &our_os_pools[MAX_POOL_ENTRIES]; ptr++) + if (*ptr) + { + PoolDeallocate(*ptr); + *ptr = 0; + } + } + +*/ + + +/* ----------------------------------------------------------------------- +History: + v2.8.6 Wed Aug 29 06:57:58 2012 Doug Lea + * fix bad comparison in dlposix_memalign + * don't reuse adjusted asize in sys_alloc + * add LOCK_AT_FORK -- thanks to Kirill Artamonov for the suggestion + * reduce compiler warnings -- thanks to all who reported/suggested these + + v2.8.5 Sun May 22 10:26:02 2011 Doug Lea (dl at gee) + * Always perform unlink checks unless INSECURE + * Add posix_memalign. + * Improve realloc to expand in more cases; expose realloc_in_place. + Thanks to Peter Buhr for the suggestion. + * Add footprint_limit, inspect_all, bulk_free. Thanks + to Barry Hayes and others for the suggestions. + * Internal refactorings to avoid calls while holding locks + * Use non-reentrant locks by default. Thanks to Roland McGrath + for the suggestion. 
+ * Small fixes to mspace_destroy, reset_on_error. + * Various configuration extensions/changes. Thanks + to all who contributed these. + + V2.8.4a Thu Apr 28 14:39:43 2011 (dl at gee.cs.oswego.edu) + * Update Creative Commons URL + + V2.8.4 Wed May 27 09:56:23 2009 Doug Lea (dl at gee) + * Use zeros instead of prev foot for is_mmapped + * Add mspace_track_large_chunks; thanks to Jean Brouwers + * Fix set_inuse in internal_realloc; thanks to Jean Brouwers + * Fix insufficient sys_alloc padding when using 16byte alignment + * Fix bad error check in mspace_footprint + * Adaptations for ptmalloc; thanks to Wolfram Gloger. + * Reentrant spin locks; thanks to Earl Chew and others + * Win32 improvements; thanks to Niall Douglas and Earl Chew + * Add NO_SEGMENT_TRAVERSAL and MAX_RELEASE_CHECK_RATE options + * Extension hook in malloc_state + * Various small adjustments to reduce warnings on some compilers + * Various configuration extensions/changes for more platforms. Thanks + to all who contributed these. + + V2.8.3 Thu Sep 22 11:16:32 2005 Doug Lea (dl at gee) + * Add max_footprint functions + * Ensure all appropriate literals are size_t + * Fix conditional compilation problem for some #define settings + * Avoid concatenating segments with the one provided + in create_mspace_with_base + * Rename some variables to avoid compiler shadowing warnings + * Use explicit lock initialization. + * Better handling of sbrk interference. + * Simplify and fix segment insertion, trimming and mspace_destroy + * Reinstate REALLOC_ZERO_BYTES_FREES option from 2.7.x + * Thanks especially to Dennis Flanagan for help on these. + + V2.8.2 Sun Jun 12 16:01:10 2005 Doug Lea (dl at gee) + * Fix memalign brace error. 
+ + V2.8.1 Wed Jun 8 16:11:46 2005 Doug Lea (dl at gee) + * Fix improper #endif nesting in C++ + * Add explicit casts needed for C++ + + V2.8.0 Mon May 30 14:09:02 2005 Doug Lea (dl at gee) + * Use trees for large bins + * Support mspaces + * Use segments to unify sbrk-based and mmap-based system allocation, + removing need for emulation on most platforms without sbrk. + * Default safety checks + * Optional footer checks. Thanks to William Robertson for the idea. + * Internal code refactoring + * Incorporate suggestions and platform-specific changes. + Thanks to Dennis Flanagan, Colin Plumb, Niall Douglas, + Aaron Bachmann, Emery Berger, and others. + * Speed up non-fastbin processing enough to remove fastbins. + * Remove useless cfree() to avoid conflicts with other apps. + * Remove internal memcpy, memset. Compilers handle builtins better. + * Remove some options that no one ever used and rename others. + + V2.7.2 Sat Aug 17 09:07:30 2002 Doug Lea (dl at gee) + * Fix malloc_state bitmap array misdeclaration + + V2.7.1 Thu Jul 25 10:58:03 2002 Doug Lea (dl at gee) + * Allow tuning of FIRST_SORTED_BIN_SIZE + * Use PTR_UINT as type for all ptr->int casts. Thanks to John Belmonte. + * Better detection and support for non-contiguousness of MORECORE. + Thanks to Andreas Mueller, Conal Walsh, and Wolfram Gloger + * Bypass most of malloc if no frees. Thanks To Emery Berger. + * Fix freeing of old top non-contiguous chunk im sysmalloc. + * Raised default trim and map thresholds to 256K. + * Fix mmap-related #defines. Thanks to Lubos Lunak. + * Fix copy macros; added LACKS_FCNTL_H. Thanks to Neal Walfield. + * Branch-free bin calculation + * Default trim and mmap thresholds now 256K. + + V2.7.0 Sun Mar 11 14:14:06 2001 Doug Lea (dl at gee) + * Introduce independent_comalloc and independent_calloc. + Thanks to Michael Pachos for motivation and help. + * Make optional .h file available + * Allow > 2GB requests on 32bit systems. 
+ * new WIN32 sbrk, mmap, munmap, lock code from . + Thanks also to Andreas Mueller , + and Anonymous. + * Allow override of MALLOC_ALIGNMENT (Thanks to Ruud Waij for + helping test this.) + * memalign: check alignment arg + * realloc: don't try to shift chunks backwards, since this + leads to more fragmentation in some programs and doesn't + seem to help in any others. + * Collect all cases in malloc requiring system memory into sysmalloc + * Use mmap as backup to sbrk + * Place all internal state in malloc_state + * Introduce fastbins (although similar to 2.5.1) + * Many minor tunings and cosmetic improvements + * Introduce USE_PUBLIC_MALLOC_WRAPPERS, USE_MALLOC_LOCK + * Introduce MALLOC_FAILURE_ACTION, MORECORE_CONTIGUOUS + Thanks to Tony E. Bennett and others. + * Include errno.h to support default failure action. + + V2.6.6 Sun Dec 5 07:42:19 1999 Doug Lea (dl at gee) + * return null for negative arguments + * Added Several WIN32 cleanups from Martin C. Fong + * Add 'LACKS_SYS_PARAM_H' for those systems without 'sys/param.h' + (e.g. WIN32 platforms) + * Cleanup header file inclusion for WIN32 platforms + * Cleanup code to avoid Microsoft Visual C++ compiler complaints + * Add 'USE_DL_PREFIX' to quickly allow co-existence with existing + memory allocation routines + * Set 'malloc_getpagesize' for WIN32 platforms (needs more work) + * Use 'assert' rather than 'ASSERT' in WIN32 code to conform to + usage of 'assert' in non-WIN32 code + * Improve WIN32 'sbrk()' emulation's 'findRegion()' routine to + avoid infinite loop + * Always call 'fREe()' rather than 'free()' + + V2.6.5 Wed Jun 17 15:57:31 1998 Doug Lea (dl at gee) + * Fixed ordering problem with boundary-stamping + + V2.6.3 Sun May 19 08:17:58 1996 Doug Lea (dl at gee) + * Added pvalloc, as recommended by H.J. 
Liu + * Added 64bit pointer support mainly from Wolfram Gloger + * Added anonymously donated WIN32 sbrk emulation + * Malloc, calloc, getpagesize: add optimizations from Raymond Nijssen + * malloc_extend_top: fix mask error that caused wastage after + foreign sbrks + * Add linux mremap support code from HJ Liu + + V2.6.2 Tue Dec 5 06:52:55 1995 Doug Lea (dl at gee) + * Integrated most documentation with the code. + * Add support for mmap, with help from + Wolfram Gloger (Gloger@lrz.uni-muenchen.de). + * Use last_remainder in more cases. + * Pack bins using idea from colin@nyx10.cs.du.edu + * Use ordered bins instead of best-fit threshhold + * Eliminate block-local decls to simplify tracing and debugging. + * Support another case of realloc via move into top + * Fix error occuring when initial sbrk_base not word-aligned. + * Rely on page size for units instead of SBRK_UNIT to + avoid surprises about sbrk alignment conventions. + * Add mallinfo, mallopt. Thanks to Raymond Nijssen + (raymond@es.ele.tue.nl) for the suggestion. + * Add `pad' argument to malloc_trim and top_pad mallopt parameter. + * More precautions for cases where other routines call sbrk, + courtesy of Wolfram Gloger (Gloger@lrz.uni-muenchen.de). + * Added macros etc., allowing use in linux libc from + H.J. Lu (hjl@gnu.ai.mit.edu) + * Inverted this history list + + V2.6.1 Sat Dec 2 14:10:57 1995 Doug Lea (dl at gee) + * Re-tuned and fixed to behave more nicely with V2.6.0 changes. + * Removed all preallocation code since under current scheme + the work required to undo bad preallocations exceeds + the work saved in good cases for most test programs. + * No longer use return list or unconsolidated bins since + no scheme using them consistently outperforms those that don't + given above changes. + * Use best fit for very large chunks to prevent some worst-cases. + * Added some support for debugging + + V2.6.0 Sat Nov 4 07:05:23 1995 Doug Lea (dl at gee) + * Removed footers when chunks are in use. 
Thanks to + Paul Wilson (wilson@cs.texas.edu) for the suggestion. + + V2.5.4 Wed Nov 1 07:54:51 1995 Doug Lea (dl at gee) + * Added malloc_trim, with help from Wolfram Gloger + (wmglo@Dent.MED.Uni-Muenchen.DE). + + V2.5.3 Tue Apr 26 10:16:01 1994 Doug Lea (dl at g) + + V2.5.2 Tue Apr 5 16:20:40 1994 Doug Lea (dl at g) + * realloc: try to expand in both directions + * malloc: swap order of clean-bin strategy; + * realloc: only conditionally expand backwards + * Try not to scavenge used bins + * Use bin counts as a guide to preallocation + * Occasionally bin return list chunks in first scan + * Add a few optimizations from colin@nyx10.cs.du.edu + + V2.5.1 Sat Aug 14 15:40:43 1993 Doug Lea (dl at g) + * faster bin computation & slightly different binning + * merged all consolidations to one part of malloc proper + (eliminating old malloc_find_space & malloc_clean_bin) + * Scan 2 returns chunks (not just 1) + * Propagate failure in realloc if malloc returns 0 + * Add stuff to allow compilation on non-ANSI compilers + from kpv@research.att.com + + V2.5 Sat Aug 7 07:41:59 1993 Doug Lea (dl at g.oswego.edu) + * removed potential for odd address access in prev_chunk + * removed dependency on getpagesize.h + * misc cosmetics and a bit more internal documentation + * anticosmetics: mangled names in macros to evade debugger strangeness + * tested on sparc, hp-700, dec-mips, rs6000 + with gcc & native cc (hp, dec only) allowing + Detlefs & Zorn comparison study (in SIGPLAN Notices.) + + Trial version Fri Aug 28 13:14:29 1992 Doug Lea (dl at g.oswego.edu) + * Based loosely on libg++-1.2X malloc. (It retains some of the overall + structure of old version, but most details differ.) 
+ +*/ diff --git a/include/malloc.h b/include/malloc.h index cb37752b86a..4608082d2e0 100644 --- a/include/malloc.h +++ b/include/malloc.h @@ -7,3 +7,628 @@ */ #include + +#if 0 /* not active yet */ + +/* + Default header file for malloc-2.8.x, written by Doug Lea + and released to the public domain, as explained at + http://creativecommons.org/publicdomain/zero/1.0/ + + This header is for ANSI C/C++ only. You can set any of + the following #defines before including: + + * If USE_DL_PREFIX is defined, it is assumed that malloc.c + was also compiled with this option, so all routines + have names starting with "dl". + + * If HAVE_USR_INCLUDE_MALLOC_H is defined, it is assumed that this + file will be #included AFTER . This is needed only if + your system defines a struct mallinfo that is incompatible with the + standard one declared here. Otherwise, you can include this file + INSTEAD of your system system . At least on ANSI, all + declarations should be compatible with system versions + + * If MSPACES is defined, declarations for mspace versions are included. 
+*/ + +#ifndef MALLOC_280_H +#define MALLOC_280_H + +#ifdef __cplusplus +extern "C" { +#endif + +#include /* for size_t */ + +#ifndef ONLY_MSPACES +#define ONLY_MSPACES 0 /* define to a value */ +#elif ONLY_MSPACES != 0 +#define ONLY_MSPACES 1 +#endif /* ONLY_MSPACES */ +#ifndef NO_MALLINFO +#define NO_MALLINFO 0 +#endif /* NO_MALLINFO */ + +#ifndef MSPACES +#if ONLY_MSPACES +#define MSPACES 1 +#else /* ONLY_MSPACES */ +#define MSPACES 0 +#endif /* ONLY_MSPACES */ +#endif /* MSPACES */ + +#if !ONLY_MSPACES + +#ifndef USE_DL_PREFIX +#define dlcalloc calloc +#define dlfree free +#define dlmalloc malloc +#define dlmemalign memalign +#define dlposix_memalign posix_memalign +#define dlrealloc realloc +#define dlvalloc valloc +#define dlpvalloc pvalloc +#define dlmallinfo mallinfo +#define dlmallopt mallopt +#define dlmalloc_trim malloc_trim +#define dlmalloc_stats malloc_stats +#define dlmalloc_usable_size malloc_usable_size +#define dlmalloc_footprint malloc_footprint +#define dlmalloc_max_footprint malloc_max_footprint +#define dlmalloc_footprint_limit malloc_footprint_limit +#define dlmalloc_set_footprint_limit malloc_set_footprint_limit +#define dlmalloc_inspect_all malloc_inspect_all +#define dlindependent_calloc independent_calloc +#define dlindependent_comalloc independent_comalloc +#define dlbulk_free bulk_free +#endif /* USE_DL_PREFIX */ + +#if !NO_MALLINFO +#ifndef HAVE_USR_INCLUDE_MALLOC_H +#ifndef _MALLOC_H +#ifndef MALLINFO_FIELD_TYPE +#define MALLINFO_FIELD_TYPE size_t +#endif /* MALLINFO_FIELD_TYPE */ +#ifndef STRUCT_MALLINFO_DECLARED +#define STRUCT_MALLINFO_DECLARED 1 +struct mallinfo { + MALLINFO_FIELD_TYPE arena; /* non-mmapped space allocated from system */ + MALLINFO_FIELD_TYPE ordblks; /* number of free chunks */ + MALLINFO_FIELD_TYPE smblks; /* always 0 */ + MALLINFO_FIELD_TYPE hblks; /* always 0 */ + MALLINFO_FIELD_TYPE hblkhd; /* space in mmapped regions */ + MALLINFO_FIELD_TYPE usmblks; /* maximum total allocated space */ + MALLINFO_FIELD_TYPE 
fsmblks; /* always 0 */
+  MALLINFO_FIELD_TYPE uordblks; /* total allocated space */
+  MALLINFO_FIELD_TYPE fordblks; /* total free space */
+  MALLINFO_FIELD_TYPE keepcost; /* releasable (via malloc_trim) space */
+};
+#endif /* STRUCT_MALLINFO_DECLARED */
+#endif /* _MALLOC_H */
+#endif /* HAVE_USR_INCLUDE_MALLOC_H */
+#endif /* !NO_MALLINFO */
+
+/*
+  malloc(size_t n)
+  Returns a pointer to a newly allocated chunk of at least n bytes, or
+  null if no space is available, in which case errno is set to ENOMEM
+  on ANSI C systems.
+
+  If n is zero, malloc returns a minimum-sized chunk. (The minimum
+  size is 16 bytes on most 32bit systems, and 32 bytes on 64bit
+  systems.) Note that size_t is an unsigned type, so calls with
+  arguments that would be negative if signed are interpreted as
+  requests for huge amounts of space, which will often fail. The
+  maximum supported value of n differs across systems, but is in all
+  cases less than the maximum representable value of a size_t.
+*/
+void* dlmalloc(size_t);
+
+/*
+  free(void* p)
+  Releases the chunk of memory pointed to by p, that had been previously
+  allocated using malloc or a related routine such as realloc.
+  It has no effect if p is null. If p was not malloced or already
+  freed, free(p) will by default cause the current program to abort.
+*/
+void dlfree(void*);
+
+/*
+  calloc(size_t n_elements, size_t element_size);
+  Returns a pointer to n_elements * element_size bytes, with all locations
+  set to zero.
+*/
+void* dlcalloc(size_t, size_t);
+
+/*
+  realloc(void* p, size_t n)
+  Returns a pointer to a chunk of size n that contains the same data
+  as does chunk p up to the minimum of (n, p's size) bytes, or null
+  if no space is available.
+
+  The returned pointer may or may not be the same as p. The algorithm
+  prefers extending p in most cases when possible, otherwise it
+  employs the equivalent of a malloc-copy-free sequence.
+
+  If p is null, realloc is equivalent to malloc.
+ + If space is not available, realloc returns null, errno is set (if on + ANSI) and p is NOT freed. + + if n is for fewer bytes than already held by p, the newly unused + space is lopped off and freed if possible. realloc with a size + argument of zero (re)allocates a minimum-sized chunk. + + The old unix realloc convention of allowing the last-free'd chunk + to be used as an argument to realloc is not supported. +*/ +void* dlrealloc(void*, size_t); + +/* + realloc_in_place(void* p, size_t n) + Resizes the space allocated for p to size n, only if this can be + done without moving p (i.e., only if there is adjacent space + available if n is greater than p's current allocated size, or n is + less than or equal to p's size). This may be used instead of plain + realloc if an alternative allocation strategy is needed upon failure + to expand space; for example, reallocation of a buffer that must be + memory-aligned or cleared. You can use realloc_in_place to trigger + these alternatives only when needed. + + Returns p if successful; otherwise null. +*/ +void* dlrealloc_in_place(void*, size_t); + +/* + memalign(size_t alignment, size_t n); + Returns a pointer to a newly allocated chunk of n bytes, aligned + in accord with the alignment argument. + + The alignment argument should be a power of two. If the argument is + not a power of two, the nearest greater power is used. + 8-byte alignment is guaranteed by normal malloc calls, so don't + bother calling memalign with an argument of 8 or less. + + Overreliance on memalign is a sure way to fragment space. +*/ +void* dlmemalign(size_t, size_t); + +/* + int posix_memalign(void** pp, size_t alignment, size_t n); + Allocates a chunk of n bytes, aligned in accord with the alignment + argument. Differs from memalign only in that it (1) assigns the + allocated memory to *pp rather than returning it, (2) fails and + returns EINVAL if the alignment is not a power of two (3) fails and + returns ENOMEM if memory cannot be allocated. 
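Since dlposix_memalign follows the standard posix_memalign contract described above, that behaviour can be demonstrated against the host C library's posix_memalign (an assumption for illustration: this sketch exercises the system allocator, not the dl-prefixed one from this patch):

```c
#define _POSIX_C_SOURCE 200112L
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

/* Returns 1 when posix_memalign succeeds and the pointer is aligned as
   requested, 0 on misalignment, or a negative errno value on failure
   (e.g. -EINVAL for a bad alignment, -ENOMEM when out of memory). */
static int aligned_alloc_check(size_t alignment, size_t size)
{
    void *p = NULL;
    int rc = posix_memalign(&p, alignment, size);
    if (rc != 0)
        return -rc;
    int ok = ((uintptr_t)p % alignment) == 0;
    free(p);
    return ok;
}
```

For example, aligned_alloc_check(64, 100) should return 1, while aligned_alloc_check(3, 100) should return -EINVAL, since 3 is not a power of two.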
+*/
+int dlposix_memalign(void**, size_t, size_t);
+
+/*
+  valloc(size_t n);
+  Equivalent to memalign(pagesize, n), where pagesize is the page
+  size of the system. If the pagesize is unknown, 4096 is used.
+*/
+void* dlvalloc(size_t);
+
+/*
+  mallopt(int parameter_number, int parameter_value)
+  Sets tunable parameters. The format is to provide a
+  (parameter-number, parameter-value) pair. mallopt then sets the
+  corresponding parameter to the argument value if it can (i.e., so
+  long as the value is meaningful), and returns 1 if successful else
+  0. SVID/XPG/ANSI defines four standard param numbers for mallopt,
+  normally defined in malloc.h. None of these are used in this malloc,
+  so setting them has no effect. But this malloc also supports other
+  options in mallopt:
+
+  Symbol            param #  default      allowed param values
+  M_TRIM_THRESHOLD    -1     2*1024*1024  any (-1U disables trimming)
+  M_GRANULARITY       -2     page size    any power of 2 >= page size
+  M_MMAP_THRESHOLD    -3     256*1024     any (or 0 if no MMAP support)
+*/
+int dlmallopt(int, int);
+
+#define M_TRIM_THRESHOLD (-1)
+#define M_GRANULARITY (-2)
+#define M_MMAP_THRESHOLD (-3)
+
+
+/*
+  malloc_footprint();
+  Returns the number of bytes obtained from the system. The total
+  number of bytes allocated by malloc, realloc etc., is less than this
+  value. Unlike mallinfo, this function returns only a precomputed
+  result, so can be called frequently to monitor memory consumption.
+  Even if locks are otherwise defined, this function does not use them,
+  so results might not be up to date.
+*/
+size_t dlmalloc_footprint(void);
+
+/*
+  malloc_max_footprint();
+  Returns the maximum number of bytes obtained from the system. This
+  value will be greater than current footprint if deallocated space
+  has been reclaimed by the system. The peak number of bytes allocated
+  by malloc, realloc etc., is less than this value.
Unlike mallinfo,
+  this function returns only a precomputed result, so can be called
+  frequently to monitor memory consumption. Even if locks are
+  otherwise defined, this function does not use them, so results might
+  not be up to date.
+*/
+size_t dlmalloc_max_footprint(void);
+
+/*
+  malloc_footprint_limit();
+  Returns the number of bytes that the heap is allowed to obtain from
+  the system, returning the last value returned by
+  malloc_set_footprint_limit, or the maximum size_t value if
+  never set. The returned value reflects a permission. There is no
+  guarantee that this number of bytes can actually be obtained from
+  the system.
+*/
+size_t dlmalloc_footprint_limit(void);
+
+/*
+  malloc_set_footprint_limit();
+  Sets the maximum number of bytes to obtain from the system, causing
+  failure returns from malloc and related functions upon attempts to
+  exceed this value. The argument value may be subject to page
+  rounding to an enforceable limit; this actual value is returned.
+  Using an argument of the maximum possible size_t effectively
+  disables checks. If the argument is less than or equal to the
+  current malloc_footprint, then all future allocations that require
+  additional system memory will fail. However, invocation cannot
+  retroactively deallocate existing used memory.
+*/
+size_t dlmalloc_set_footprint_limit(size_t bytes);
+
+/*
+  malloc_inspect_all(void(*handler)(void *start,
+                                    void *end,
+                                    size_t used_bytes,
+                                    void* callback_arg),
+                     void* arg);
+  Traverses the heap and calls the given handler for each managed
+  region, skipping all bytes that are (or may be) used for bookkeeping
+  purposes. Traversal does not include chunks that have been
+  directly memory mapped. Each reported region begins at the start
+  address, and continues up to but not including the end address. The
+  first used_bytes of the region contain allocated data. If
+  used_bytes is zero, the region is unallocated. The handler is
+  invoked with the given callback argument.
If locks are defined, they + are held during the entire traversal. It is a bad idea to invoke + other malloc functions from within the handler. + + For example, to count the number of in-use chunks with size greater + than 1000, you could write: + static int count = 0; + void count_chunks(void* start, void* end, size_t used, void* arg) { + if (used >= 1000) ++count; + } + then: + malloc_inspect_all(count_chunks, NULL); + + malloc_inspect_all is compiled only if MALLOC_INSPECT_ALL is defined. +*/ +void dlmalloc_inspect_all(void(*handler)(void*, void *, size_t, void*), + void* arg); + +#if !NO_MALLINFO +/* + mallinfo() + Returns (by copy) a struct containing various summary statistics: + + arena: current total non-mmapped bytes allocated from system + ordblks: the number of free chunks + smblks: always zero. + hblks: current number of mmapped regions + hblkhd: total bytes held in mmapped regions + usmblks: the maximum total allocated space. This will be greater + than current total if trimming has occurred. + fsmblks: always zero + uordblks: current total allocated space (normal or mmapped) + fordblks: total free space + keepcost: the maximum number of bytes that could ideally be released + back to system via malloc_trim. ("ideally" means that + it ignores page restrictions etc.) + + Because these fields are ints, but internal bookkeeping may + be kept as longs, the reported values may wrap around zero and + thus be inaccurate. +*/ + +struct mallinfo dlmallinfo(void); +#endif /* NO_MALLINFO */ + +/* + independent_calloc(size_t n_elements, size_t element_size, void* chunks[]); + + independent_calloc is similar to calloc, but instead of returning a + single cleared space, it returns an array of pointers to n_elements + independent elements that can hold contents of size elem_size, each + of which starts out cleared, and can be independently freed, + realloc'ed etc. 
The elements are guaranteed to be adjacently + allocated (this is not guaranteed to occur with multiple callocs or + mallocs), which may also improve cache locality in some + applications. + + The "chunks" argument is optional (i.e., may be null, which is + probably the most typical usage). If it is null, the returned array + is itself dynamically allocated and should also be freed when it is + no longer needed. Otherwise, the chunks array must be of at least + n_elements in length. It is filled in with the pointers to the + chunks. + + In either case, independent_calloc returns this pointer array, or + null if the allocation failed. If n_elements is zero and "chunks" + is null, it returns a chunk representing an array with zero elements + (which should be freed if not wanted). + + Each element must be freed when it is no longer needed. This can be + done all at once using bulk_free. + + independent_calloc simplifies and speeds up implementations of many + kinds of pools. It may also be useful when constructing large data + structures that initially have a fixed number of fixed-sized nodes, + but the number is not known at compile time, and some of the nodes + may later need to be freed. For example: + + struct Node { int item; struct Node* next; }; + + struct Node* build_list() { + struct Node** pool; + int n = read_number_of_nodes_needed(); + if (n <= 0) return 0; + pool = (struct Node**)(independent_calloc(n, sizeof(struct Node), 0); + if (pool == 0) die(); + // organize into a linked list... + struct Node* first = pool[0]; + for (i = 0; i < n-1; ++i) + pool[i]->next = pool[i+1]; + free(pool); // Can now free the array (or not, if it is needed later) + return first; + } +*/ +void** dlindependent_calloc(size_t, size_t, void**); + +/* + independent_comalloc(size_t n_elements, size_t sizes[], void* chunks[]); + + independent_comalloc allocates, all at once, a set of n_elements + chunks with sizes indicated in the "sizes" array. 
It returns + an array of pointers to these elements, each of which can be + independently freed, realloc'ed etc. The elements are guaranteed to + be adjacently allocated (this is not guaranteed to occur with + multiple callocs or mallocs), which may also improve cache locality + in some applications. + + The "chunks" argument is optional (i.e., may be null). If it is null + the returned array is itself dynamically allocated and should also + be freed when it is no longer needed. Otherwise, the chunks array + must be of at least n_elements in length. It is filled in with the + pointers to the chunks. + + In either case, independent_comalloc returns this pointer array, or + null if the allocation failed. If n_elements is zero and chunks is + null, it returns a chunk representing an array with zero elements + (which should be freed if not wanted). + + Each element must be freed when it is no longer needed. This can be + done all at once using bulk_free. + + independent_comallac differs from independent_calloc in that each + element may have a different size, and also that it does not + automatically clear elements. + + independent_comalloc can be used to speed up allocation in cases + where several structs or objects must always be allocated at the + same time. For example: + + struct Head { ... } + struct Foot { ... } + + void send_message(char* msg) { + int msglen = strlen(msg); + size_t sizes[3] = { sizeof(struct Head), msglen, sizeof(struct Foot) }; + void* chunks[3]; + if (independent_comalloc(3, sizes, chunks) == 0) + die(); + struct Head* head = (struct Head*)(chunks[0]); + char* body = (char*)(chunks[1]); + struct Foot* foot = (struct Foot*)(chunks[2]); + // ... + } + + In general though, independent_comalloc is worth using only for + larger values of n_elements. For small values, you probably won't + detect enough difference from series of malloc calls to bother. 
+ + Overuse of independent_comalloc can increase overall memory usage, + since it cannot reuse existing noncontiguous small chunks that + might be available for some of the elements. +*/ +void** dlindependent_comalloc(size_t, size_t*, void**); + +/* + bulk_free(void* array[], size_t n_elements) + Frees and clears (sets to null) each non-null pointer in the given + array. This is likely to be faster than freeing them one-by-one. + If footers are used, pointers that have been allocated in different + mspaces are not freed or cleared, and the count of all such pointers + is returned. For large arrays of pointers with poor locality, it + may be worthwhile to sort this array before calling bulk_free. +*/ +size_t dlbulk_free(void**, size_t n_elements); + +/* + pvalloc(size_t n); + Equivalent to valloc(minimum-page-that-holds(n)), that is, + round up n to nearest pagesize. + */ +void* dlpvalloc(size_t); + +/* + malloc_trim(size_t pad); + + If possible, gives memory back to the system (via negative arguments + to sbrk) if there is unused memory at the `high' end of the malloc + pool or in unused MMAP segments. You can call this after freeing + large blocks of memory to potentially reduce the system-level memory + requirements of a program. However, it cannot guarantee to reduce + memory. Under some allocation patterns, some large free blocks of + memory will be locked between two used chunks, so they cannot be + given back to the system. + + The `pad' argument to malloc_trim represents the amount of free + trailing space to leave untrimmed. If this argument is zero, only + the minimum amount of memory to maintain internal data structures + will be left. Non-zero arguments can be supplied to maintain enough + trailing space to service future expected allocations without having + to re-obtain memory from the system. + + Malloc_trim returns 1 if it actually released any memory, else 0. 
+*/ +int dlmalloc_trim(size_t); + +/* + malloc_stats(); + Prints on stderr the amount of space obtained from the system (both + via sbrk and mmap), the maximum amount (which may be more than + current if malloc_trim and/or munmap got called), and the current + number of bytes allocated via malloc (or realloc, etc) but not yet + freed. Note that this is the number of bytes allocated, not the + number requested. It will be larger than the number requested + because of alignment and bookkeeping overhead. Because it includes + alignment wastage as being in use, this figure may be greater than + zero even when no user-level chunks are allocated. + + The reported current and maximum system memory can be inaccurate if + a program makes other calls to system memory allocation functions + (normally sbrk) outside of malloc. + + malloc_stats prints only the most commonly interesting statistics. + More information can be obtained by calling mallinfo. + + malloc_stats is not compiled if NO_MALLOC_STATS is defined. +*/ +void dlmalloc_stats(void); + +#endif /* !ONLY_MSPACES */ + +/* + malloc_usable_size(void* p); + + Returns the number of bytes you can actually use in + an allocated chunk, which may be more than you requested (although + often not) due to alignment and minimum size constraints. + You can use this many bytes without worrying about + overwriting other allocated objects. This is not a particularly great + programming practice. malloc_usable_size can be more useful in + debugging and assertions, for example: + + p = malloc(n); + assert(malloc_usable_size(p) >= 256); +*/ +size_t dlmalloc_usable_size(const void*); + +#if MSPACES + +/* + mspace is an opaque type representing an independent + region of space that supports mspace_malloc, etc. +*/ +typedef void* mspace; + +/* + create_mspace creates and returns a new independent space with the + given initial capacity, or, if 0, the default granularity size. 
It + returns null if there is no system memory available to create the + space. If argument locked is non-zero, the space uses a separate + lock to control access. The capacity of the space will grow + dynamically as needed to service mspace_malloc requests. You can + control the sizes of incremental increases of this space by + compiling with a different DEFAULT_GRANULARITY or dynamically + setting with mallopt(M_GRANULARITY, value). +*/ +mspace create_mspace(size_t capacity, int locked); + +/* + destroy_mspace destroys the given space, and attempts to return all + of its memory back to the system, returning the total number of + bytes freed. After destruction, the results of access to all memory + used by the space become undefined. +*/ +size_t destroy_mspace(mspace msp); + +/* + create_mspace_with_base uses the memory supplied as the initial base + of a new mspace. Part (less than 128*sizeof(size_t) bytes) of this + space is used for bookkeeping, so the capacity must be at least this + large. (Otherwise 0 is returned.) When this initial space is + exhausted, additional memory will be obtained from the system. + Destroying this space will deallocate all additionally allocated + space (if possible) but not the initial base. +*/ +mspace create_mspace_with_base(void* base, size_t capacity, int locked); + +/* + mspace_track_large_chunks controls whether requests for large chunks + are allocated in their own untracked mmapped regions, separate from + others in this mspace. By default large chunks are not tracked, + which reduces fragmentation. However, such chunks are not + necessarily released to the system upon destroy_mspace. Enabling + tracking by setting to true may increase fragmentation, but avoids + leakage when relying on destroy_mspace to release all memory + allocated using this space. The function returns the previous + setting. 
+*/
+int mspace_track_large_chunks(mspace msp, int enable);
+
+#if !NO_MALLINFO
+/*
+  mspace_mallinfo behaves as mallinfo, but reports properties of
+  the given space.
+*/
+struct mallinfo mspace_mallinfo(mspace msp);
+#endif /* NO_MALLINFO */
+
+/*
+  An alias for mallopt.
+*/
+int mspace_mallopt(int, int);
+
+/*
+  The following operate identically to their malloc counterparts
+  but operate only for the given mspace argument
+*/
+void* mspace_malloc(mspace msp, size_t bytes);
+void mspace_free(mspace msp, void* mem);
+void* mspace_calloc(mspace msp, size_t n_elements, size_t elem_size);
+void* mspace_realloc(mspace msp, void* mem, size_t newsize);
+void* mspace_realloc_in_place(mspace msp, void* mem, size_t newsize);
+void* mspace_memalign(mspace msp, size_t alignment, size_t bytes);
+void** mspace_independent_calloc(mspace msp, size_t n_elements,
+                                 size_t elem_size, void* chunks[]);
+void** mspace_independent_comalloc(mspace msp, size_t n_elements,
+                                   size_t sizes[], void* chunks[]);
+size_t mspace_bulk_free(mspace msp, void**, size_t n_elements);
+size_t mspace_usable_size(const void* mem);
+void mspace_malloc_stats(mspace msp);
+int mspace_trim(mspace msp, size_t pad);
+size_t mspace_footprint(mspace msp);
+size_t mspace_max_footprint(mspace msp);
+size_t mspace_footprint_limit(mspace msp);
+size_t mspace_set_footprint_limit(mspace msp, size_t bytes);
+void mspace_inspect_all(mspace msp,
+                        void(*handler)(void *, void *, size_t, void*),
+                        void* arg);
+#endif /* MSPACES */
+
+#ifdef __cplusplus
+};  /* end of extern "C" */
+#endif
+
+#endif /* MALLOC_280_H */
+
+#endif /* not active yet */
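[Editor's illustration] The mspace API above can be exercised with a minimal sketch. dlmalloc itself is not linked here, so trivial bump-allocator stand-ins (the function bodies below are illustrative assumptions, not dlmalloc's real implementations; only the names and signatures come from the header above) model the create/allocate/destroy lifecycle:

```c
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-ins for dlmalloc's mspace API: a fixed-size bump
 * arena. The real functions grow dynamically and support free/realloc. */
typedef struct {
    char *base;                 /* arena memory */
    size_t size;                /* total capacity */
    size_t used;                /* bytes handed out so far */
} arena;

typedef void *mspace;

mspace create_mspace(size_t capacity, int locked)
{
    arena *a = malloc(sizeof(*a));

    (void)locked;               /* stub ignores locking */
    if (!a)
        return NULL;
    a->size = capacity ? capacity : 64 * 1024;
    a->base = malloc(a->size);
    a->used = 0;
    if (!a->base) {
        free(a);
        return NULL;
    }
    return a;
}

void *mspace_malloc(mspace msp, size_t bytes)
{
    arena *a = msp;
    /* round requests up to 8-byte alignment, as a real allocator would */
    size_t need = (bytes + 7) & ~(size_t)7;
    void *p;

    if (a->used + need > a->size)
        return NULL;            /* arena exhausted */
    p = a->base + a->used;
    a->used += need;
    return p;
}

size_t destroy_mspace(mspace msp)
{
    arena *a = msp;
    size_t freed = a->size;

    free(a->base);
    free(a);
    return freed;               /* bytes returned to the system */
}
```

The point of the API shape is that each mspace is a self-contained heap: everything allocated from it can be reclaimed in one `destroy_mspace()` call, which is why the patches later in this series can use it for scoped allocation.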
From patchwork Mon Dec 1 17:04:47 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 783
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:47 -0700
Message-ID: <20251201170529.3237986-9-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC:
 Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 08/37] malloc: Add mem_malloc_init() and sbrk()
List-Id: Discussion and patches related to U-Boot Concept
From: Simon Glass

Add mem_malloc_init() to initialise the malloc heap and sbrk() for heap
extension. These are U-Boot-specific functions that manage the memory
pool used by dlmalloc.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++
 include/malloc.h  | 27 +++++++++++++++++++++++++
 2 files changed, 78 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index ba8ca5bd0ae..2be2262ebeb 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -542,6 +542,18 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #define DLMALLOC_EXPORT extern
 #endif

+#ifdef __UBOOT__
+#include
+#include
+
+DECLARE_GLOBAL_DATA_PTR;
+
+ulong mem_malloc_start;
+ulong mem_malloc_end;
+ulong mem_malloc_brk;
+
+#endif /* __UBOOT__ */
+
 #ifndef WIN32
 #ifdef _WIN32
 #define WIN32 1
@@ -6290,3 +6302,42 @@ History:
       structure of old version, but most details differ.)
 */
+
+/* --------------------- U-Boot additions --------------------- */
+
+#ifdef __UBOOT__
+
+void *sbrk(ptrdiff_t increment)
+{
+	ulong old = mem_malloc_brk;
+	ulong new = old + increment;
+
+	/* mem_malloc_end points one byte past the end, so >= is correct */
+	if ((new < mem_malloc_start) || (new >= mem_malloc_end))
+		return (void *)MORECORE_FAILURE;
+
+	/*
+	 * if we are giving memory back make sure we clear it out since
+	 * we set MORECORE_CLEARS to 1
+	 */
+	if (increment < 0)
+		memset((void *)new, '\0', -increment);
+
+	mem_malloc_brk = new;
+
+	return (void *)old;
+}
+
+void mem_malloc_init(ulong start, ulong size)
+{
+	mem_malloc_start = (ulong)map_sysmem(start, size);
+	mem_malloc_end = mem_malloc_start + size;
+	mem_malloc_brk = mem_malloc_end;
+
+	debug("using memory %#lx-%#lx for malloc()\n", mem_malloc_start,
+	      mem_malloc_end);
+#if CONFIG_IS_ENABLED(SYS_MALLOC_CLEAR_ON_INIT)
+	memset((void *)mem_malloc_start, '\0', size);
+#endif
+}
+#endif /* __UBOOT__ */
diff --git a/include/malloc.h b/include/malloc.h
index 4608082d2e0..e0a5b732203 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -625,6 +625,33 @@ void mspace_inspect_all(mspace msp,
                         void* arg);
 #endif /* MSPACES */

+/* --------------------- U-Boot additions --------------------- */
+
+#ifdef __UBOOT__
+#include
+
+/* Memory pool boundaries */
+extern ulong mem_malloc_start;
+extern ulong mem_malloc_end;
+extern ulong mem_malloc_brk;
+
+/**
+ * mem_malloc_init() - Initialize the malloc() heap
+ *
+ * @start: Start address of heap memory region
+ * @size: Size of heap memory region in bytes
+ */
+void mem_malloc_init(ulong start, ulong size);
+
+/**
+ * sbrk() - Extend the heap
+ *
+ * @increment: Number of bytes to add (or remove if negative)
+ * Return: Previous break value on success, MORECORE_FAILURE on error
+ */
+void *sbrk(ptrdiff_t increment);
+#endif /* __UBOOT__ */
+
 #ifdef __cplusplus
 }; /* end of extern "C" */
 #endif
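[Editor's illustration] The break-pointer logic in the sbrk() above can be exercised host-side with a small sketch. The buffer replaces map_sysmem(), MORECORE_FAILURE is assumed to be (-1) as in dlmalloc's defaults, and the break here starts at the bottom of the region (the usual convention); names prefixed test_ are illustrative, not part of the patch:

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define MORECORE_FAILURE (-1)   /* assumption: dlmalloc's default value */

static uintptr_t heap_start, heap_end, heap_brk;

/* Same shape as the patch's sbrk(): move the break, fail past the end */
void *test_sbrk(ptrdiff_t increment)
{
    uintptr_t old = heap_brk;
    uintptr_t new_brk = old + increment;

    /* heap_end points one byte past the end, so >= is correct */
    if (new_brk < heap_start || new_brk >= heap_end)
        return (void *)MORECORE_FAILURE;

    /* clear memory being given back (MORECORE_CLEARS semantics) */
    if (increment < 0)
        memset((void *)new_brk, 0, -increment);

    heap_brk = new_brk;
    return (void *)old;         /* previous break, like Unix sbrk() */
}

/* Analogue of mem_malloc_init() over a caller-supplied buffer */
void test_heap_init(void *buf, size_t size)
{
    heap_start = (uintptr_t)buf;
    heap_end = heap_start + size;
    heap_brk = heap_start;      /* break starts at the bottom */
}
```

The key property dlmalloc relies on is that a successful call returns the *old* break (the base of the newly granted region), while an out-of-range request leaves the break untouched and returns the failure sentinel.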
From patchwork Mon Dec 1 17:04:48 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 784
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:48 -0700
Message-ID: <20251201170529.3237986-10-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
X-MailFrom:
 sjg@u-boot.org
CC: Heinrich Schuchardt, Claude
Subject: [Concept] [PATCH 09/37] malloc: Add U-Boot configuration for dlmalloc 2.8.6
List-Id: Discussion and patches related to U-Boot Concept
From: Simon Glass

Add an #ifdef __UBOOT__ section to configure dlmalloc for U-Boot's
embedded environment:

- Disable mmap, set LACKS_* for unavailable headers
- Include string.h and errno.h
- Add ABORT definition using infinite loop
- Define DEBUG 0 to avoid assert redefinition issues
- Fix dlmalloc_footprint_limit() prototype (add void)
- Fix dlmalloc_usable_size() to use const void *
- Use MFAIL instead of MORECORE_FAILURE in sbrk()

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 2be2262ebeb..480dd46c0cf 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -543,6 +543,26 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #endif

 #ifdef __UBOOT__
+
+#define LACKS_FCNTL_H
+#define LACKS_UNISTD_H
+#define LACKS_SYS_PARAM_H
+#define LACKS_SYS_MMAN_H
+#define LACKS_SYS_TYPES_H
+#define LACKS_SCHED_H
+#define LACKS_TIME_H
+#define HAVE_MMAP 0
+#define HAVE_MREMAP 0
+#define MORECORE_CONTIGUOUS 1
+#define MORECORE_CANNOT_TRIM 1
+#define MORECORE_CLEARS 1
+#define NO_MALLOC_STATS 1
+#define USE_LOCKS 0
+#define USE_SPIN_LOCKS 0
+#define MALLOC_FAILURE_ACTION
+#define ABORT do {} while (1)
+
+#include
 #include
 #include

@@ -1020,7 +1040,7 @@ DLMALLOC_EXPORT size_t dlmalloc_max_footprint(void);
   guarantee that this number of bytes can actually be obtained from
   the system.
 */
-DLMALLOC_EXPORT size_t dlmalloc_footprint_limit();
+DLMALLOC_EXPORT size_t dlmalloc_footprint_limit(void);

 /*
   malloc_set_footprint_limit();
@@ -1281,7 +1301,7 @@ DLMALLOC_EXPORT void dlmalloc_stats(void);
   p = malloc(n);
   assert(malloc_usable_size(p) >= 256);
 */
-size_t dlmalloc_usable_size(void*);
+size_t dlmalloc_usable_size(const void*);

 #endif /* ONLY_MSPACES */

@@ -5400,9 +5420,9 @@ int dlmallopt(int param_number, int value) {
   return change_mparam(param_number, value);
 }

-size_t dlmalloc_usable_size(void* mem) {
+size_t dlmalloc_usable_size(const void* mem) {
   if (mem != 0) {
-    mchunkptr p = mem2chunk(mem);
+    mchunkptr p = mem2chunk((void*)mem);
     if (is_inuse(p))
       return chunksize(p) - overhead_for(p);
   }
@@ -6314,7 +6334,7 @@ void *sbrk(ptrdiff_t increment)

 	/* mem_malloc_end points one byte past the end, so >= is correct */
 	if ((new < mem_malloc_start) || (new >= mem_malloc_end))
-		return (void *)MORECORE_FAILURE;
+		return MFAIL;

 	/*
 	 * if we are giving memory back make sure we clear it out since
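[Editor's illustration] Patch 09 swaps MORECORE_FAILURE for MFAIL in sbrk(). MFAIL is the sentinel dlmalloc uses internally for a failed core request: the all-ones address, not NULL. The definitions below mirror dlmalloc's internals; stub_morecore and its fixed budget are illustrative assumptions showing how a MORECORE-style callback reports exhaustion:

```c
#include <stddef.h>

/* Mirrors dlmalloc's internal failure sentinel */
#define MAX_SIZE_T (~(size_t)0)
#define MFAIL ((void *)MAX_SIZE_T)
#define CMFAIL ((char *)MFAIL)  /* same sentinel as a char pointer */

/* A MORECORE-style callback either returns a usable address or MFAIL.
 * This stub hands out slices of a static pool and fails once the
 * budget is exhausted (pool size is illustrative). */
static char pool[128];
static size_t pool_used;

static void *stub_morecore(size_t nb)
{
    void *p;

    if (pool_used + nb > sizeof(pool))
        return MFAIL;           /* out of memory: sentinel, not NULL */
    p = pool + pool_used;
    pool_used += nb;
    return p;
}
```

Using an all-ones sentinel rather than NULL lets callers distinguish "no memory" from a legitimate zero address, which matters in bare-metal environments like U-Boot where address 0 can be valid RAM.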
From patchwork Mon Dec 1 17:04:49 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 785
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:49 -0700
Message-ID: <20251201170529.3237986-11-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Claude
Subject: [Concept] [PATCH 10/37] malloc: Fix assert warning
List-Id: Discussion and patches related to U-Boot Concept
From: Simon Glass

Include log.h early to prevent an assert() redefinition warning.
Co-developed-by: Claude
Signed-off-by: Simon Glass <sjg@u-boot.org>
---
 common/dlmalloc.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 480dd46c0cf..ff13a779211 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -562,6 +562,7 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #define MALLOC_FAILURE_ACTION
 #define ABORT do {} while (1)
+#include <log.h>
 #include
 #include
 #include

From patchwork Mon Dec 1 17:04:50 2025
From: Simon Glass <sjg@u-boot.org>
To: U-Boot Concept <u-boot-concept@u-boot.org>
Date: Mon, 1 Dec 2025 10:04:50 -0700
Message-ID: <20251201170529.3237986-12-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 11/37] malloc: return NULL if not initialized yet

When malloc() was called before it was properly initialized (as would
happen when it is used before relocation to RAM) it returned random,
non-NULL values, which caused all kinds of difficult-to-debug errors.
Make sure to return NULL when initialization has not been done yet.
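The guard this patch adds is easy to model outside U-Boot. A minimal sketch (all names here are hypothetical, for illustration only — this is not the dlmalloc code itself):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Sketch of the check this patch adds: an allocator whose heap region
 * has not been registered yet should fail cleanly with NULL instead of
 * returning a pointer derived from uninitialised state.
 */
static size_t heap_start, heap_end, heap_cur;

static void heap_init(size_t start, size_t size)
{
	heap_start = start;
	heap_cur = start;
	heap_end = start + size;
}

static void *toy_malloc(size_t bytes)
{
	/* Return NULL if not initialized yet */
	if (!heap_start && !heap_end)
		return NULL;
	if (heap_cur + bytes > heap_end)
		return NULL;	/* out of space */
	heap_cur += bytes;
	return (void *)(uintptr_t)(heap_cur - bytes);
}
```

Before `heap_init()` runs, every allocation fails with NULL rather than returning a bogus but non-NULL address, which is exactly the failure mode the commit message describes.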
Signed-off-by: Simon Glass <sjg@u-boot.org>
(cherry picked from commit 2740544881f652566756815dda4da0bcd946e9de)
---
 common/dlmalloc.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index ff13a779211..baa9b500e10 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -4582,6 +4582,11 @@ static void* tmalloc_small(mstate m, size_t nb) {
 #if !ONLY_MSPACES
 void* dlmalloc(size_t bytes) {
+#ifdef __UBOOT__
+	/* Return NULL if not initialized yet */
+	if (!mem_malloc_start && !mem_malloc_end)
+		return NULL;
+#endif
   /*
      Basic algorithm:
      If a small request (< 256 bytes minus per-chunk overhead):

From patchwork Mon Dec 1 17:04:51 2025
From: Simon Glass <sjg@u-boot.org>
To: U-Boot Concept <u-boot-concept@u-boot.org>
Date: Mon, 1 Dec 2025 10:04:51 -0700
Message-ID: <20251201170529.3237986-13-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Simon Glass, Claude
Subject: [Concept] [PATCH 12/37] Add a simple malloc() implementation for pre-relocation

If we are to have driver model before relocation we need to support some
way of calling memory allocation routines. The standard malloc() is
pretty complicated:

1. It uses some BSS memory for its state, and BSS is not available
   before relocation.

2. It supports algorithms for reducing memory fragmentation and
   improving the performance of free(). Before relocation we could
   happily just not support free().

3. It includes about 4KB of code (Thumb 2) and 1KB of data. However,
   since this has been loaded anyway, this is not really a problem.

The simplest way to support pre-relocation malloc() is to reserve an
area of memory and allocate it in increasing blocks as needed.
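Reserving a region and handing out increasing blocks is just a bump allocator. A minimal sketch of the idea (hypothetical names — this is not U-Boot's actual malloc_simple() implementation):

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Bump allocator sketch: carve increasing blocks out of a fixed,
 * pre-reserved pool. There is no free(); the whole pool is simply
 * abandoned once the full malloc() takes over after relocation.
 */
static uint8_t pool[256];
static size_t pool_used;

static void *simple_alloc(size_t bytes)
{
	/* Round each block up so the next one stays pointer-aligned */
	size_t aligned = (bytes + sizeof(void *) - 1) & ~(sizeof(void *) - 1);
	void *p;

	if (pool_used + aligned > sizeof(pool))
		return NULL;	/* pool exhausted */
	p = pool + pool_used;
	pool_used += aligned;
	return p;
}
```

Allocation is a pointer increment and a bounds check, which is why this approach needs no BSS-resident bookkeeping beyond the pool base and offset.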
This implementation does this. To enable it, you need to define the size
of the malloc() pool as described in the README. It will be located
above the pre-relocation stack on supported architectures.

Note that this implementation is only useful on machines which have some
memory available before dram_init() is called - this includes those that
do no DRAM init (like tegra) and those that do it in SPL (quite a few
boards). Enabling driver model prior to relocation for the rest of the
boards is left for a later exercise.

Changes from original commit:
- Squash in commit 'malloc: Redirect to malloc_simple before relocation'
- Modify dlmalloc/dlfree/dlrealloc/dlmemalign (new 2.8.6 names)
- Add #ifdef __UBOOT__ wrapper around the checks
- Redirect to malloc_simple()/memalign_simple() instead of embedding code
- Add declarations for malloc_simple() and memalign_simple() to malloc.h
- Move global_data.h include and DECLARE_GLOBAL_DATA_PTR to top
- Add proper documentation for the two new functions

Signed-off-by: Simon Glass <sjg@u-boot.org>
Co-developed-by: Claude
Signed-off-by: Simon Glass <sjg@u-boot.org>
(cherry picked from commit d59476b6446799c21e64147d86483140154c1886)
---
 common/dlmalloc.c | 38 ++++++++++++++++++++++++++++++++++++++
 include/malloc.h  | 36 ++++++++++++++++++++++++++++++++++++
 2 files changed, 74 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index baa9b500e10..f0b6db20f5c 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -565,6 +565,7 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #include
 #include
 #include
+#include
 #include

 DECLARE_GLOBAL_DATA_PTR;
@@ -4583,6 +4584,11 @@ static void* tmalloc_small(mstate m, size_t nb) {
 void* dlmalloc(size_t bytes) {
 #ifdef __UBOOT__
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+		return malloc_simple(bytes);
+#endif
+
 	/* Return NULL if not initialized yet */
 	if (!mem_malloc_start && !mem_malloc_end)
 		return NULL;
@@ -4725,6 +4731,13 @@ void* dlmalloc(size_t bytes) {
 /*
  ---------------------------- free ---------------------------
 */

 void dlfree(void* mem) {
+#ifdef __UBOOT__
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	/* free() is a no-op - all the memory will be freed on relocation */
+	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+		return;
+#endif
+#endif
   /*
      Consolidate freed chunks with preceeding or succeeding bordering
      free chunks, if they exist, and then place in a bin. Intermixed
@@ -5228,6 +5241,14 @@ static void internal_inspect_all(mstate m,
 #if !ONLY_MSPACES
 void* dlrealloc(void* oldmem, size_t bytes) {
+#ifdef __UBOOT__
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+		/* This is harder to support and should not be needed */
+		panic("pre-reloc realloc() is not supported");
+	}
+#endif
+#endif
   void* mem = 0;
   if (oldmem == 0) {
     mem = dlmalloc(bytes);
@@ -5304,6 +5325,12 @@ void* dlrealloc_in_place(void* oldmem, size_t bytes) {
 }

 void* dlmemalign(size_t alignment, size_t bytes) {
+#ifdef __UBOOT__
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+		return memalign_simple(alignment, bytes);
+#endif
+#endif
   if (alignment <= MALLOC_ALIGNMENT) {
     return dlmalloc(bytes);
   }
@@ -6366,4 +6393,15 @@ void mem_malloc_init(ulong start, ulong size)
 	memset((void *)mem_malloc_start, '\0', size);
 #endif
 }
+
+int initf_malloc(void)
+{
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	assert(gd->malloc_base);	/* Set up by crt0.S */
+	gd->malloc_limit = CONFIG_VAL(SYS_MALLOC_F_LEN);
+	gd->malloc_ptr = 0;
+#endif
+
+	return 0;
+}
 #endif /* __UBOOT__ */
diff --git a/include/malloc.h b/include/malloc.h
index e0a5b732203..d5cccc96e50 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -650,6 +650,42 @@ void mem_malloc_init(ulong start, ulong size);
  * Return: Previous break value on success, MORECORE_FAILURE on error
  */
 void *sbrk(ptrdiff_t increment);
+
+/**
+ * malloc_simple() - Allocate memory from the simple malloc pool
+ *
+ * Allocates memory from a simple pool used before full malloc() is
+ * available. This is used before relocation when BSS is not yet
+ * available for dlmalloc's state. Memory allocated with this function
+ * cannot be freed.
+ *
+ * @size: Number of bytes to allocate
+ * Return: Pointer to allocated memory, or NULL if pool is exhausted
+ */
+void *malloc_simple(size_t size);
+
+/**
+ * memalign_simple() - Allocate aligned memory from the simple malloc pool
+ *
+ * Allocates aligned memory from a simple pool used before full malloc()
+ * is available. This is used before relocation when BSS is not yet
+ * available for dlmalloc's state. Memory allocated with this function
+ * cannot be freed.
+ *
+ * @alignment: Required alignment (must be a power of 2)
+ * @bytes: Number of bytes to allocate
+ * Return: Pointer to allocated memory, or NULL if pool is exhausted
+ */
+void *memalign_simple(size_t alignment, size_t bytes);
+
+/**
+ * initf_malloc() - Set up the early malloc() pool
+ *
+ * Sets up the simple malloc() pool which is used before full malloc()
+ * is available after relocation.
+ *
+ * Return: 0 (always succeeds)
+ */
+int initf_malloc(void);
+
 #endif /* __UBOOT__ */

 #ifdef __cplusplus

From patchwork Mon Dec 1 17:04:52 2025
From: Simon Glass <sjg@u-boot.org>
To: U-Boot Concept <u-boot-concept@u-boot.org>
Date: Mon, 1 Dec 2025 10:04:52 -0700
Message-ID: <20251201170529.3237986-14-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Sean Anderson, Simon Glass, Claude
Subject: [Concept] [PATCH 13/37] malloc: Enable assertions if UNIT_TEST is enabled

dlmalloc has some sanity checks it performs on free() which can help
detect memory corruption. However, they are only enabled if DEBUG is
defined before including common.h. Define DEBUG earlier if UNIT_TEST is
enabled so that assertions are enabled in sandbox.
Changes from original commit(s):
- Combine commits 213adf6dffe and 1786861415f
- Use 'DEBUG 1' instead of 'DEBUG' since new dlmalloc uses '#if DEBUG'

Signed-off-by: Sean Anderson
Reviewed-by: Simon Glass <sjg@u-boot.org>
Co-developed-by: Claude
Signed-off-by: Simon Glass <sjg@u-boot.org>
(cherry picked from commit 6d7601e74437f3c915667a829ab722ba5174ec72)
(cherry picked from commit 1786861415f4494a38630584a8fbc9c939a024ce)
---
 common/dlmalloc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index f0b6db20f5c..98de6523758 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -544,6 +544,10 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #ifdef __UBOOT__

+#if CONFIG_IS_ENABLED(UNIT_TEST)
+#define DEBUG 1
+#endif
+
 #define LACKS_FCNTL_H
 #define LACKS_UNISTD_H
 #define LACKS_SYS_PARAM_H

From patchwork Mon Dec 1 17:04:53 2025
From: Simon Glass <sjg@u-boot.org>
To: U-Boot Concept <u-boot-concept@u-boot.org>
Date: Mon, 1 Dec 2025 10:04:53 -0700
Message-ID: <20251201170529.3237986-15-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 14/37] malloc: Reduce code size with INSECURE and NO_MALLINFO

Enable INSECURE=1 to skip runtime heap validation checks (except for
sandbox), and NO_MALLINFO=1 to remove mallinfo support (except when unit
tests are enabled). These reduce code size significantly.
Co-developed-by: Claude
Signed-off-by: Simon Glass <sjg@u-boot.org>
---
 common/dlmalloc.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 98de6523758..268d3fea52a 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -563,6 +563,14 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #define NO_MALLOC_STATS 1
 #define USE_LOCKS 0
 #define USE_SPIN_LOCKS 0
+
+#if !CONFIG_IS_ENABLED(UNIT_TEST)
+#define NO_MALLINFO 1
+#endif
+#if !CONFIG_IS_ENABLED(SANDBOX)
+#define INSECURE 1
+#endif
+
 #define MALLOC_FAILURE_ACTION
 #define ABORT do {} while (1)

From patchwork Mon Dec 1 17:04:54 2025
From: Simon Glass <sjg@u-boot.org>
To: U-Boot Concept <u-boot-concept@u-boot.org>
Date: Mon, 1 Dec 2025 10:04:54 -0700
Message-ID: <20251201170529.3237986-16-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Stephen Warren, Tom Rini, Lukasz Majewski, Claude
Subject: [Concept] [PATCH 15/37] malloc: work around some memalign fragmentation issues

Use of memalign can trigger fragmentation issues where the over-sized
allocation needed to guarantee alignment fails, even though the exact
user-requested size would succeed and be properly aligned.

If the padded allocation fails, try allocating exactly the user's
requested size. If that happens to be aligned, return it. Otherwise, try
a third allocation with just enough extra space to achieve alignment.
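The three-attempt strategy can be sketched on top of a plain malloc()/free() pair. This is a hypothetical helper for illustration, not the dlmalloc internals this patch modifies; note that a real implementation must also keep the original pointer so the block can later be freed:

```c
#include <stdint.h>
#include <stdlib.h>

/*
 * Fallback order described above:
 *   1. padded request (guarantees an aligned region inside the block);
 *   2. exact-size request, used only if it happens to be aligned;
 *   3. request with just enough slack to fix the observed misalignment.
 * align must be a power of 2.
 */
static void *memalign_fallback(size_t align, size_t bytes)
{
	void *mem = malloc(bytes + align);	/* 1: padded attempt */

	if (!mem) {
		mem = malloc(bytes);		/* 2: exact size */
		if (mem && ((uintptr_t)mem & (align - 1)) == 0)
			return mem;		/* already aligned: done */
		if (mem) {
			/* 3: retry with slack matching this misalignment */
			size_t extra = align - (uintptr_t)mem % align;

			free(mem);
			mem = malloc(bytes + extra);
			/*
			 * The new block may land elsewhere; give up if the
			 * old slack no longer covers its misalignment.
			 */
			if (mem && (uintptr_t)mem % align &&
			    align - (uintptr_t)mem % align > extra) {
				free(mem);
				return NULL;
			}
		}
		if (!mem)
			return NULL;
	}
	/* Round up inside the block to the requested alignment */
	return (void *)(((uintptr_t)mem + align - 1) &
			~(uintptr_t)(align - 1));
}
```

The point of the second attempt is that it asks for strictly less memory than the padded request, so it can succeed in a fragmented heap where the padded request cannot.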
Changes from original commits:
- Port to dlmalloc 2.8.6 internal_memalign() instead of mEMALIGn()
- Use internal_malloc/internal_free instead of mALLOc/fREe

Signed-off-by: Stephen Warren
Reviewed-by: Tom Rini
Acked-by: Lukasz Majewski
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from 4f144a416469c6a29127b0656523ae628ea7cbaf)
---
 common/dlmalloc.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 268d3fea52a..5a8e463671c 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -4975,6 +4975,46 @@ static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
   size_t nb = request2size(bytes);
   size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
   mem = internal_malloc(m, req);
+#ifdef __UBOOT__
+	/*
+	 * The attempt to over-allocate (with a size large enough to guarantee the
+	 * ability to find an aligned region within allocated memory) failed.
+	 *
+	 * Try again, this time only allocating exactly the size the user wants.
+	 * If the allocation now succeeds and just happens to be aligned, we can
+	 * still fulfill the user's request.
+	 */
+	if (mem == 0) {
+		size_t extra, extra2;
+
+		mem = internal_malloc(m, bytes);
+		/* Aligned -> use it */
+		if (mem != 0 && (((size_t)(mem)) & (alignment - 1)) == 0)
+			return mem;
+		/*
+		 * Otherwise, try again, requesting enough extra space to be able to
+		 * acquire alignment.
+		 */
+		if (mem != 0) {
+			internal_free(m, mem);
+			/* Add in extra bytes to match misalignment of unexpanded alloc */
+			extra = alignment - (((size_t)(mem)) % alignment);
+			mem = internal_malloc(m, bytes + extra);
+			/*
+			 * mem might not be the same as before. Validate that the previous
+			 * value of extra still works for the current value of mem.
+			 */
+			if (mem != 0) {
+				extra2 = alignment - (((size_t)(mem)) % alignment);
+				if (extra2 > extra) {
+					internal_free(m, mem);
+					mem = 0;
+				}
+			}
+		}
+		/* Fall through to original NULL check and chunk splitting logic */
+	}
+#endif
   if (mem != 0) {
     mchunkptr p = mem2chunk(mem);
     if (PREACTION(m))
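The fallback order the patch implements can be shown in a standalone host-side sketch. This is not the patch's code: `tight_malloc()` and `memalign_with_fallback()` are hypothetical stand-ins, and the artificial budget in `tight_malloc()` just makes the padded first attempt fail while an exact-size attempt can still succeed.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Hypothetical constrained heap: refuse requests larger than a budget so
 * the over-sized first attempt can fail while the exact size still fits.
 */
static size_t heap_budget = 128;

static void *tight_malloc(size_t bytes)
{
	if (bytes > heap_budget)
		return NULL;
	return malloc(bytes);
}

/*
 * Fallback order from the patch: (1) padded request large enough to carve
 * out an aligned block, (2) exact-size request that may happen to be
 * aligned, (3) exact size plus just enough extra to re-align.
 */
static void *memalign_with_fallback(size_t alignment, size_t bytes)
{
	void *mem = tight_malloc(bytes + alignment); /* padded attempt */

	if (!mem) {
		mem = tight_malloc(bytes); /* exact-size attempt */
		if (mem && ((uintptr_t)mem & (alignment - 1)) == 0)
			return mem; /* lucky: already aligned */
		if (mem) {
			size_t extra = alignment - ((uintptr_t)mem % alignment);

			free(mem);
			mem = tight_malloc(bytes + extra);
			/* the real code re-validates alignment here */
		}
	}
	/* the real code then carves the aligned chunk out of mem */
	return mem;
}
```

Note that, as in the patch, the third attempt computes `extra` from the address of the freed exact-size block, betting that the allocator will hand back the same region; the real code therefore re-checks alignment and gives up if the bet fails.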
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:55 -0700
Message-ID: <20251201170529.3237986-17-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Rabin Vincent, Stephen Warren, Tom Rini, Simon Glass, Claude
Subject: [Concept] [PATCH 16/37] malloc: use hidden visibility

When running sandbox, U-Boot's malloc symbols can be hooked into the GOT before any U-Boot code runs. This causes problems because the dynamic linker may call malloc()/free() before gd is initialized.

Use hidden visibility for the malloc symbols to prevent them from being hooked into the GOT. That way only code in the U-Boot binary itself calls them; any other code calls the standard C library malloc().
Changes from original commit:
- Use DLMALLOC_EXPORT mechanism instead of #pragma in malloc.h

Cc: Rabin Vincent
Signed-off-by: Stephen Warren
Reviewed-by: Tom Rini
Reviewed-by: Simon Glass
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit 2f0bcd4de1a5b990e58d12cd0c7f9d7e9248fec4)
---
 common/dlmalloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 5a8e463671c..d53cbf2f2e1 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -538,6 +538,15 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #define DLMALLOC_VERSION 20806
 #endif /* DLMALLOC_VERSION */

+/*
+ * For U-Boot, use hidden visibility to prevent malloc symbols from being
+ * hooked into the GOT, avoiding issues during early initialization before
+ * gd is set up.
+ */
+#ifdef __UBOOT__
+#define DLMALLOC_EXPORT extern __attribute__((visibility("hidden")))
+#endif
+
 #ifndef DLMALLOC_EXPORT
 #define DLMALLOC_EXPORT extern
 #endif
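The DLMALLOC_EXPORT override pattern can be sketched on its own. This is a minimal illustration, not the patch itself: `UBOOT_BUILD` stands in for `__UBOOT__`, and `dlmalloc_demo()` is a hypothetical entry point. Hidden visibility keeps the symbol out of the dynamic symbol table, so calls to it bind within the binary instead of going through the GOT/PLT where the dynamic linker (or an interposing library) could redirect them.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * When building for U-Boot, declare allocator entry points with hidden
 * visibility so they are bound inside the binary rather than resolved
 * through the GOT/PLT. UBOOT_BUILD is a stand-in for __UBOOT__.
 */
#ifdef UBOOT_BUILD
#define DLMALLOC_EXPORT extern __attribute__((visibility("hidden")))
#else
#define DLMALLOC_EXPORT extern
#endif

DLMALLOC_EXPORT void *dlmalloc_demo(size_t bytes);

/* With hidden visibility, this symbol cannot be interposed at load time */
void *dlmalloc_demo(size_t bytes)
{
	return malloc(bytes);
}
```

The visibility attribute only changes linkage, not behavior, so the function works identically in either configuration; the difference shows up in the shared object's dynamic symbol table.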
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:56 -0700
Message-ID: <20251201170529.3237986-18-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Simon Goldschmidt, Tom Rini, Claude
Subject: [Concept] [PATCH 17/37] dlmalloc: calloc: fix zeroing early allocations

When full malloc is enabled and SYS_MALLOC_F is also enabled, the simple pre-relocation heap is used before relocation. The calloc_must_clear() macro relies on chunk metadata which does not exist for simple-malloc allocations. Use memset() directly to zero memory obtained from the simple malloc.
Changes from original commit:
- Port to dlcalloc() in dlmalloc 2.8.6
- Update memset() second arg to be a char

Signed-off-by: Simon Goldschmidt
Reviewed-by: Tom Rini
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from bb71a2d9dcd9c53aa4d4b8e4d26c24d9b59b74c3)
---
 common/dlmalloc.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index d53cbf2f2e1..a07166206dc 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -4877,6 +4877,15 @@ void* dlcalloc(size_t n_elements, size_t elem_size) {
       req = MAX_SIZE_T; /* force downstream failure on overflow */
   }
   mem = dlmalloc(req);
+#ifdef __UBOOT__
+#if CONFIG_IS_ENABLED(SYS_MALLOC_F)
+	/* For pre-reloc simple malloc, just zero the memory directly */
+	if (mem != 0 && !(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+		memset(mem, '\0', req);
+		return mem;
+	}
+#endif
+#endif
   if (mem != 0 && calloc_must_clear(mem2chunk(mem)))
     memset(mem, 0, req);
   return mem;
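The shape of dlcalloc()'s logic can be sketched against the host allocator. This is a simplified illustration, not the patch: `calloc_simple()` is hypothetical and always zeroes explicitly, which is exactly the behavior the patch forces for the pre-relocation simple heap, where no chunk metadata exists to prove the memory is already clear.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/*
 * Sketch of the dlcalloc() flow: compute the request with an overflow
 * check, then zero the block explicitly when the underlying allocator
 * (like U-Boot's pre-reloc simple heap) keeps no chunk metadata that
 * could tell us the memory is already clear.
 */
static void *calloc_simple(size_t n_elements, size_t elem_size)
{
	size_t req = n_elements * elem_size;
	void *mem;

	/* Force downstream failure on multiplication overflow */
	if (elem_size != 0 && req / elem_size != n_elements)
		req = SIZE_MAX;

	mem = malloc(req);
	if (mem)
		memset(mem, '\0', req); /* no metadata: always zero directly */
	return mem;
}
```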
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:57 -0700
Message-ID: <20251201170529.3237986-19-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Simon Glass, Claude
Subject: [Concept] [PATCH 18/37] sandbox: Use a prefix for all allocation functions

In order to allow use of both U-Boot's malloc() and the C library's version, set a prefix for the allocation functions so that the two can co-exist. This is only done for sandbox; for other architectures everything remains the same.
Signed-off-by: Simon Glass
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from cfda60f99ae237494e9341aad9676152d3bac3c9)
---
 include/malloc.h | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/include/malloc.h b/include/malloc.h
index d5cccc96e50..76068032da7 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -60,6 +60,14 @@ extern "C" {

 #if !ONLY_MSPACES

+/*
+ * Rename the U-Boot alloc functions so that sandbox can still use the system
+ * ones
+ */
+#ifdef CONFIG_SANDBOX
+#define USE_DL_PREFIX
+#endif
+
 #ifndef USE_DL_PREFIX
 #define dlcalloc calloc
 #define dlfree free
@@ -82,6 +90,21 @@ extern "C" {
 #define dlindependent_calloc independent_calloc
 #define dlindependent_comalloc independent_comalloc
 #define dlbulk_free bulk_free
+#else /* USE_DL_PREFIX */
+/* Ensure that U-Boot actually uses dlmalloc versions */
+#define calloc(n, s) dlcalloc(n, s)
+#define free(p) dlfree(p)
+#define malloc(s) dlmalloc(s)
+#define memalign(a, s) dlmemalign(a, s)
+#define posix_memalign(p, a, s) dlposix_memalign(p, a, s)
+#define realloc(p, s) dlrealloc(p, s)
+#define valloc(s) dlvalloc(s)
+#define pvalloc(s) dlpvalloc(s)
+#define mallinfo() dlmallinfo()
+#define mallopt(p, v) dlmallopt(p, v)
+#define malloc_trim(s) dlmalloc_trim(s)
+#define malloc_stats() dlmalloc_stats()
+#define malloc_usable_size(p) dlmalloc_usable_size(p)
 #endif /* USE_DL_PREFIX */

 #if !NO_MALLINFO
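The macro-renaming trick can be demonstrated in isolation. This is a hedged sketch, not U-Boot code: `dl_demo_malloc`/`demo_malloc` are hypothetical names playing the roles of `dlmalloc` and the plain `malloc` macro, showing how callers keep using the standard name while the call is routed to the prefixed implementation, leaving the C library's own symbol untouched.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/*
 * Sketch of USE_DL_PREFIX: route the standard allocator names through
 * macros to prefixed implementations so U-Boot's allocator and the host
 * C library's allocator can co-exist in one process.
 */
static int dl_demo_calls; /* counts calls routed to the prefixed allocator */

static void *dl_demo_malloc(size_t bytes)
{
	dl_demo_calls++;
	return malloc(bytes); /* the "real" backing allocator */
}

static void dl_demo_free(void *p)
{
	free(p);
}

/* As in the patch, plain names become macros over the prefixed versions */
#define demo_malloc(s) dl_demo_malloc(s)
#define demo_free(p) dl_demo_free(p)
```

In the real header the macros shadow `malloc()` itself, which is why sandbox — where the system allocator must remain reachable — defines `USE_DL_PREFIX` and lets everything else keep the unprefixed names.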
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:58 -0700
Message-ID: <20251201170529.3237986-20-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Sean Anderson, Simon Glass, Claude
Subject: [Concept] [PATCH 19/37] malloc: Annotate allocator for valgrind
Add valgrind annotations to track memory allocations:
- VALGRIND_MALLOCLIKE_BLOCK in dlmalloc() at the postaction label
- VALGRIND_FREELIKE_BLOCK in dlfree() for both pre-reloc and post-reloc paths
- VALGRIND_RESIZEINPLACE_BLOCK/VALGRIND_MAKE_MEM_DEFINED in dlrealloc() and dlrealloc_in_place() when resizing in place
- VALGRIND_MALLOCLIKE_BLOCK/VALGRIND_FREELIKE_BLOCK in dlrealloc() when allocating new memory

Changes from original commit:
- The new dlmalloc 2.8.6 uses a centralized "postaction" label pattern instead of multiple return points, allowing simpler annotation
- Annotations placed at strategic points covering all allocation paths
- dlrealloc_in_place() is a new function that needs annotations

Signed-off-by: Sean Anderson
Reviewed-by: Simon Glass
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit bdaeea1b6863b0ec80f2d4bc15d50b8d16efa708)
---
 common/dlmalloc.c | 29 ++++++++++++++++++++++++++++-
 1 file changed, 28 insertions(+), 1 deletion(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index a07166206dc..9298fc445e4 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -588,6 +588,7 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #include
 #include
 #include
+#include

 DECLARE_GLOBAL_DATA_PTR;

@@ -4743,6 +4744,10 @@ void* dlmalloc(size_t bytes) {
   postaction:
     POSTACTION(gm);
+#ifdef __UBOOT__
+    if (mem)
+        VALGRIND_MALLOCLIKE_BLOCK(mem, bytes, SIZE_SZ, false);
+#endif
     return mem;
 }

@@ -4755,8 +4760,10 @@ void dlfree(void* mem) {
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
 	/* free() is a no-op - all the memory will be freed on relocation */
-	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
+	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
+		VALGRIND_FREELIKE_BLOCK(mem, SIZE_SZ);
 		return;
+	}
 #endif
 #endif
 	/*
 #endif /* FOOTERS */
     if (!PREACTION(fm)) {
       check_inuse_chunk(fm, p);
+#ifdef __UBOOT__
+      VALGRIND_FREELIKE_BLOCK(mem, SIZE_SZ);
+#endif
       if (RTCHECK(ok_address(fm, p) && ok_inuse(p))) {
         size_t psize = chunksize(p);
         mchunkptr next = chunk_plus_offset(p, psize);
@@ -5349,12 +5359,25 @@ void* dlrealloc(void* oldmem, size_t bytes) {
       if (newp != 0) {
         check_inuse_chunk(m, newp);
         mem = chunk2mem(newp);
+#ifdef __UBOOT__
+        if (mem == oldmem) {
+          VALGRIND_RESIZEINPLACE_BLOCK(oldmem, 0, bytes, SIZE_SZ);
+          VALGRIND_MAKE_MEM_DEFINED(oldmem, bytes);
+        } else {
+          VALGRIND_MALLOCLIKE_BLOCK(mem, bytes, SIZE_SZ, false);
+          VALGRIND_FREELIKE_BLOCK(oldmem, SIZE_SZ);
+        }
+#endif
       }
       else {
         mem = internal_malloc(m, bytes);
         if (mem != 0) {
           size_t oc = chunksize(oldp) - overhead_for(oldp);
           memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
+#ifdef __UBOOT__
+          VALGRIND_MALLOCLIKE_BLOCK(mem, bytes, SIZE_SZ, false);
+          VALGRIND_FREELIKE_BLOCK(oldmem, SIZE_SZ);
+#endif
           internal_free(m, oldmem);
         }
       }
@@ -5387,6 +5410,10 @@ void* dlrealloc_in_place(void* oldmem, size_t bytes) {
         if (newp == oldp) {
           check_inuse_chunk(m, newp);
           mem = oldmem;
+#ifdef __UBOOT__
+          VALGRIND_RESIZEINPLACE_BLOCK(oldmem, 0, bytes, SIZE_SZ);
+          VALGRIND_MAKE_MEM_DEFINED(oldmem, bytes);
+#endif
         }
       }
     }
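The annotation pattern is easy to try outside dlmalloc: wrap any allocator's entry points with MALLOCLIKE/FREELIKE client requests. This sketch is not the patch; `tracked_malloc()`/`tracked_free()` are hypothetical wrappers, and `DEMO_REDZONE` stands in for dlmalloc's `SIZE_SZ`. If the valgrind headers are unavailable the macros collapse to no-ops, so the code builds and runs either way (valgrind's client requests are themselves no-ops when the program is not running under valgrind).

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Use the real client-request macros when available, else no-ops */
#if defined(__has_include)
#if __has_include(<valgrind/valgrind.h>)
#include <valgrind/valgrind.h>
#endif
#endif

#ifndef VALGRIND_MALLOCLIKE_BLOCK
#define VALGRIND_MALLOCLIKE_BLOCK(addr, size, rz, zeroed) ((void)0)
#define VALGRIND_FREELIKE_BLOCK(addr, rz) ((void)0)
#endif

#define DEMO_REDZONE sizeof(size_t) /* stand-in for dlmalloc's SIZE_SZ */

static void *tracked_malloc(size_t bytes)
{
	void *mem = malloc(bytes);

	/* Tell memcheck this block behaves like a heap allocation */
	if (mem)
		VALGRIND_MALLOCLIKE_BLOCK(mem, bytes, DEMO_REDZONE, 0);
	return mem;
}

static void tracked_free(void *mem)
{
	/* Must pair with the MALLOCLIKE annotation above */
	VALGRIND_FREELIKE_BLOCK(mem, DEMO_REDZONE);
	free(mem);
}
```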
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:04:59 -0700
Message-ID: <20251201170529.3237986-21-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Simon Glass, Sean Anderson, Claude
Subject: [Concept] [PATCH 20/37] test: Support testing malloc() failures

It is helpful to
test that out-of-memory checks work correctly in code that calls malloc(). Add a simple way to force failure after a given number of malloc() calls. Also add support for realloc() testing (from commit 04894f5ad53). Changes from original commits: - Variable declarations moved to top of U-Boot section (before dlmalloc()) - Adapted to new dlmalloc function names (dlmalloc/dlrealloc vs mALLOc/rEALLOc) Signed-off-by: Simon Glass Reviewed-by: Sean Anderson Co-developed-by: Claude Signed-off-by: Simon Glass (cherry picked from commit 62d638386c17d17b929ad10956c7f60825335a4e) (cherry picked from commit 04894f5ad53cab0ee03eb3bc1cc1682e22f5dd1b) --- common/dlmalloc.c | 24 ++++++++++++++++++++++++ include/malloc.h | 12 ++++++++++++ 2 files changed, 36 insertions(+) diff --git a/common/dlmalloc.c b/common/dlmalloc.c index 9298fc445e4..aacc9b5db3b 100644 --- a/common/dlmalloc.c +++ b/common/dlmalloc.c @@ -592,6 +592,9 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP DECLARE_GLOBAL_DATA_PTR; +static bool malloc_testing; /* enable test mode */ +static int malloc_max_allocs; /* return NULL after this many calls to malloc() */ + ulong mem_malloc_start; ulong mem_malloc_end; ulong mem_malloc_brk; @@ -4614,6 +4617,11 @@ void* dlmalloc(size_t bytes) { /* Return NULL if not initialized yet */ if (!mem_malloc_start && !mem_malloc_end) return NULL; + + if (CONFIG_IS_ENABLED(UNIT_TEST) && malloc_testing) { + if (--malloc_max_allocs < 0) + return NULL; + } #endif /* Basic algorithm: @@ -5328,6 +5336,10 @@ void* dlrealloc(void* oldmem, size_t bytes) { panic("pre-reloc realloc() is not supported"); } #endif + if (CONFIG_IS_ENABLED(UNIT_TEST) && malloc_testing) { + if (--malloc_max_allocs < 0) + return NULL; + } #endif void* mem = 0; if (oldmem == 0) { @@ -6491,6 +6503,17 @@ void mem_malloc_init(ulong start, ulong size) #endif } +void malloc_enable_testing(int max_allocs) +{ + malloc_testing = true; + malloc_max_allocs = max_allocs; +} + +void malloc_disable_testing(void) +{ + 
malloc_testing = false; +} + int initf_malloc(void) { #if CONFIG_IS_ENABLED(SYS_MALLOC_F) @@ -6501,4 +6524,5 @@ int initf_malloc(void) return 0; } + #endif /* __UBOOT__ */ diff --git a/include/malloc.h b/include/malloc.h index 76068032da7..72db7fdb507 100644 --- a/include/malloc.h +++ b/include/malloc.h @@ -658,6 +658,18 @@ extern ulong mem_malloc_start; extern ulong mem_malloc_end; extern ulong mem_malloc_brk; +/** + * malloc_enable_testing() - Enable malloc failure testing + * + * @max_allocs: Number of allocations to allow before returning NULL + */ +void malloc_enable_testing(int max_allocs); + +/** + * malloc_disable_testing() - Disable malloc failure testing + */ +void malloc_disable_testing(void); + /** * mem_malloc_init() - Initialize the malloc() heap * From patchwork Mon Dec 1 17:05:00 2025 Content-Type: text/plain; charset="utf-8" MIME-Version: 1.0 Content-Transfer-Encoding: 7bit X-Patchwork-Submitter: Simon Glass X-Patchwork-Id: 796 Return-Path: X-Original-To: u-boot-concept@u-boot.org Delivered-To: u-boot-concept@u-boot.org DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=u-boot.org; s=default; t=1764608950; bh=adMada9xJXFYclOvVbTDIPTdIdO1ESxsrbhOTG8dpdw=; h=From:To:Date:In-Reply-To:References:CC:Subject:List-Id: List-Archive:List-Help:List-Owner:List-Post:List-Subscribe: List-Unsubscribe:From; b=MGS+US5ScBPKF1XzZs1IDof+ZQSsG1wuPaLE1JmgEKZOprOLPO8yZ5GmBPnqZ8ex4 MJwAxA7k89hHOOIEthg8qpP5vmskCXEP3YjTFpmpNNU3LKv1vrJLOXcGdVk9kevYgC WLpRgbR2TbAfVm7Crdc/byudyfea5AIYKi4c7RWzJSP/x5im5yaMyMRlipD1MveX+1 /j3dfBXsopxuQKu6nPfIjV/hOzaLzDqrsIwcZ8zYd07JILO8ShbykG/jB3V/7211zg poTCbSvhqEyo9iF7XYJ7Yu4ZTTcoY3Eua2dflpwX0P7+AfIFuEXKqZumXjxac2tLdO 02AP75Hqrlltw== Received: from localhost (localhost [127.0.0.1]) by mail.u-boot.org (Postfix) with ESMTP id 7C697688C5 for ; Mon, 1 Dec 2025 10:09:10 -0700 (MST) X-Virus-Scanned: Debian amavis at Received: from mail.u-boot.org ([127.0.0.1]) by localhost (mail.u-boot.org [127.0.0.1]) (amavis, port 10024) with ESMTP id 
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:00 -0700
Message-ID: <20251201170529.3237986-22-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Tom Rini, Simon Glass, Claude
Subject: [Concept] [PATCH 21/37] dlmalloc: Fix a warning with clang-15

Add (void) to the dlmalloc_stats() function definition to match its
declaration and avoid the clang-15 warning about function declarations
without prototypes.

Signed-off-by: Tom Rini
Reviewed-by: Simon Glass
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit f88d48cc74f0e78b14fed812101d94de65e43802)
---
 common/dlmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index aacc9b5db3b..03fd902c9f6 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -5553,7 +5553,7 @@ struct mallinfo dlmallinfo(void) {
 #endif /* NO_MALLINFO */
 
 #if !NO_MALLOC_STATS
-void dlmalloc_stats() {
+void dlmalloc_stats(void) {
   internal_malloc_stats(gm);
 }
 #endif /* NO_MALLOC_STATS */

From patchwork Mon Dec 1 17:05:01 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:01 -0700
Message-ID: <20251201170529.3237986-23-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Eugene Uriev, Claude
Subject: [Concept] [PATCH 22/37] mcheck: prepare +1 tier for mcheck-wrappers, in dl-*alloc commands

Add a STATIC_IF_MCHECK macro and rename the main allocation functions
to *_impl versions (dlmalloc_impl, dlfree_impl, dlrealloc_impl,
dlmemalign_impl, dlcalloc_impl) to prepare for mcheck wrappers. When
MCHECK_HEAP_PROTECTION is not defined, the *_impl macros map directly
to the original function names, so behavior is unchanged.

Changes from original commit:
- Adapted to new dlmalloc 2.8.6 function names (dl* vs mALLOc, etc.)
- Updated all internal calls to use *_impl versions

Signed-off-by: Eugene Uriev
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit c82ff481159d2cf7e637c709df84883e09bba588)
---
 common/dlmalloc.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 03fd902c9f6..972cadd2e2f 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -557,6 +557,17 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 #define DEBUG 1
 #endif
 
+#ifdef MCHECK_HEAP_PROTECTION
+#define STATIC_IF_MCHECK static
+#else
+#define STATIC_IF_MCHECK
+#define dlmalloc_impl dlmalloc
+#define dlfree_impl dlfree
+#define dlrealloc_impl dlrealloc
+#define dlmemalign_impl dlmemalign
+#define dlcalloc_impl dlcalloc
+#endif
+
 #define LACKS_FCNTL_H
 #define LACKS_UNISTD_H
 #define LACKS_SYS_PARAM_H
@@ -4607,7 +4618,8 @@ static void* tmalloc_small(mstate m, size_t nb) {
 
 #if !ONLY_MSPACES
 
-void* dlmalloc(size_t bytes) {
+STATIC_IF_MCHECK
+void* dlmalloc_impl(size_t bytes) {
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
 	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
@@ -4764,7 +4776,8 @@ void* dlmalloc(size_t bytes) {
 
 /* ---------------------------- free --------------------------- */
 
-void dlfree(void* mem) {
+STATIC_IF_MCHECK
+void dlfree_impl(void* mem) {
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
 	/* free() is a no-op - all the memory will be freed on relocation */
@@ -4885,7 +4898,8 @@ void dlfree(void* mem) {
 #endif /* FOOTERS */
 }
 
-void* dlcalloc(size_t n_elements, size_t elem_size) {
+STATIC_IF_MCHECK
+void* dlcalloc_impl(size_t n_elements, size_t elem_size) {
   void* mem;
   size_t req = 0;
   if (n_elements != 0) {
@@ -4894,7 +4908,7 @@ void* dlcalloc(size_t n_elements, size_t elem_size) {
       (req / n_elements != elem_size))
     req = MAX_SIZE_T; /* force downstream failure on overflow */
   }
-  mem = dlmalloc(req);
+  mem = dlmalloc_impl(req);
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
   /* For pre-reloc simple malloc, just zero the memory directly */
@@ -5328,7 +5342,8 @@ static void internal_inspect_all(mstate m,
 
 #if !ONLY_MSPACES
 
-void* dlrealloc(void* oldmem, size_t bytes) {
+STATIC_IF_MCHECK
+void* dlrealloc_impl(void* oldmem, size_t bytes) {
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
 	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT)) {
@@ -5343,14 +5358,14 @@ void* dlrealloc(void* oldmem, size_t bytes) {
 #endif
   void* mem = 0;
   if (oldmem == 0) {
-    mem = dlmalloc(bytes);
+    mem = dlmalloc_impl(bytes);
   }
   else if (bytes >= MAX_REQUEST) {
     MALLOC_FAILURE_ACTION;
   }
 #ifdef REALLOC_ZERO_BYTES_FREES
   else if (bytes == 0) {
-    dlfree(oldmem);
+    dlfree_impl(oldmem);
   }
 #endif /* REALLOC_ZERO_BYTES_FREES */
   else {
@@ -5433,7 +5448,8 @@ void* dlrealloc_in_place(void* oldmem, size_t bytes) {
   return mem;
 }
 
-void* dlmemalign(size_t alignment, size_t bytes) {
+STATIC_IF_MCHECK
+void* dlmemalign_impl(size_t alignment, size_t bytes) {
 #ifdef __UBOOT__
 #if CONFIG_IS_ENABLED(SYS_MALLOC_F)
 	if (!(gd->flags & GD_FLG_FULL_MALLOC_INIT))
@@ -5441,7 +5457,7 @@ void* dlmemalign(size_t alignment, size_t bytes) {
 #endif
 #endif
   if (alignment <= MALLOC_ALIGNMENT) {
-    return dlmalloc(bytes);
+    return dlmalloc_impl(bytes);
   }
   return internal_memalign(gm, alignment, bytes);
 }
@@ -5449,7 +5465,7 @@ void* dlmemalign(size_t alignment, size_t bytes) {
 int dlposix_memalign(void** pp, size_t alignment, size_t bytes) {
   void* mem = 0;
   if (alignment == MALLOC_ALIGNMENT)
-    mem = dlmalloc(bytes);
+    mem = dlmalloc_impl(bytes);
   else {
     size_t d = alignment / sizeof(void*);
     size_t r = alignment % sizeof(void*);
@@ -5473,14 +5489,14 @@ void* dlvalloc(size_t bytes) {
   size_t pagesz;
   ensure_initialization();
   pagesz = mparams.page_size;
-  return dlmemalign(pagesz, bytes);
+  return dlmemalign_impl(pagesz, bytes);
 }
 
 void* dlpvalloc(size_t bytes) {
   size_t pagesz;
   ensure_initialization();
   pagesz = mparams.page_size;
-  return dlmemalign(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
+  return dlmemalign_impl(pagesz, (bytes + pagesz - SIZE_T_ONE) & ~(pagesz - SIZE_T_ONE));
 }
 
 void** dlindependent_calloc(size_t n_elements, size_t elem_size,

From patchwork Mon Dec 1 17:05:02 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:02 -0700
Message-ID: <20251201170529.3237986-24-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Eugene Uriev, Claude
Subject: [Concept] [PATCH 23/37] mcheck: Use memset/memcpy instead of MALLOC_ZERO/MALLOC_COPY for mcheck.

The fast MALLOC_ZERO and MALLOC_COPY helpers sometimes reach beyond
mem-chunk boundaries, so they trigger the mcheck alarm; the standard
functions stay within bounds. When MCHECK_HEAP_PROTECTION is enabled,
redefine MALLOC_ZERO and MALLOC_COPY to use standard memset/memcpy
instead of the optimized versions that may access memory outside
allocated chunks.

Signed-off-by: Eugene Uriev
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit dfba071ddc3e609e61770b34ab0115fbce05edb2)
---
 common/dlmalloc.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 972cadd2e2f..4f88e48f4b0 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -559,6 +559,10 @@ MAX_RELEASE_CHECK_RATE default: 4095 unless not HAVE_MMAP
 
 #ifdef MCHECK_HEAP_PROTECTION
 #define STATIC_IF_MCHECK static
+#undef MALLOC_COPY
+#undef MALLOC_ZERO
+static inline void MALLOC_ZERO(void *p, size_t sz) { memset(p, 0, sz); }
+static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(dest, src, sz); }
 #else
 #define STATIC_IF_MCHECK
 #define dlmalloc_impl dlmalloc

From patchwork Mon Dec 1 17:05:03 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:03 -0700
Message-ID: <20251201170529.3237986-25-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Eugene Uriev, Claude
Subject: [Concept] [PATCH 24/37] mcheck: integrate mcheck into dlmalloc.c

Add mcheck wrapper functions for dlmalloc, dlfree, dlrealloc,
dlmemalign, and dlcalloc. When MCHECK_HEAP_PROTECTION is enabled, these
wrappers call the mcheck hooks around the internal *_impl functions to
provide heap corruption detection. Also add the mcheck() and mprobe()
API functions.

Changes from original commit:
- Adapted for dlmalloc 2.8.6 function names (dl* instead of mALLOc)
- Updated function signatures (void* instead of Void_t*)

Signed-off-by: Eugene Uriev
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit 151493a875071448e2582489f6fa84d1630b3368)
---
 common/dlmalloc.c | 63 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 63 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 4f88e48f4b0..102b6c2bf8d 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -5591,6 +5591,69 @@ size_t dlmalloc_usable_size(const void* mem) {
   return 0;
 }
 
+#ifdef MCHECK_HEAP_PROTECTION
+#include "mcheck_core.inc.h"
+
+void *dlmalloc(size_t bytes)
+{
+	size_t fullsz = mcheck_alloc_prehook(bytes);
+	void *p = dlmalloc_impl(fullsz);
+
+	if (!p)
+		return p;
+	return mcheck_alloc_posthook(p, bytes);
+}
+
+void dlfree(void *mem) { dlfree_impl(mcheck_free_prehook(mem)); }
+
+void *dlrealloc(void *oldmem, size_t bytes)
+{
+	if (bytes == 0) {
+		if (oldmem)
+			dlfree(oldmem);
+		return NULL;
+	}
+
+	if (oldmem == NULL)
+		return dlmalloc(bytes);
+
+	void *p = mcheck_reallocfree_prehook(oldmem);
+	size_t newsz = mcheck_alloc_prehook(bytes);
+
+	p = dlrealloc_impl(p, newsz);
+	if (!p)
+		return p;
+	return mcheck_alloc_noclean_posthook(p, bytes);
+}
+
+void *dlmemalign(size_t alignment, size_t bytes)
+{
+	return NULL;
+}
+
+/* dlpvalloc, dlvalloc redirect to dlmemalign, so they need no wrapping */
+
+void *dlcalloc(size_t n, size_t elem_size)
+{
+	/* NB: no overflow check here */
+	size_t fullsz = mcheck_alloc_prehook(n * elem_size);
+	void *p = dlcalloc_impl(1, fullsz);
+
+	if (!p)
+		return p;
+	return mcheck_alloc_noclean_posthook(p, n * elem_size);
+}
+
+/* mcheck API */
+int mcheck(mcheck_abortfunc_t f)
+{
+	mcheck_initialize(f, 0);
+	return 0;
+}
+
+enum mcheck_status mprobe(void *__ptr) { return mcheck_mprobe(__ptr); }
+#endif /* MCHECK_HEAP_PROTECTION */
+
 #endif /* !ONLY_MSPACES */
 
 /* ----------------------------- user mspaces ---------------------------- */

From patchwork Mon Dec 1 17:05:04 2025
mail.u-boot.org (localhost [127.0.0.1]) by mail.u-boot.org (Postfix) with ESMTP id B1C8968821 for ; Mon, 1 Dec 2025 10:09:21 -0700 (MST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=u-boot.org; s=default; t=1764608959; bh=tNPTa7+KmWfWOAiQ2beic9xECVZRGwyw5y1WE4Pce4U=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=LJch64uNNO7wr3ZpCym0JQcNau2b6bHqu51D67xSxue45EDGkG988ARu0fK19I75e LQiX5ndK2TIWwcj2PPwUQxvvW7Iwle1r3hrisF9a4POibp9xqAQqy/WFqbJpCWixhx GazMQDZHiC6vvD2EwpnvLPmB0GvmIsmkbhkzEWYfiaxAnzuGSzu+y7Y7O/FqXO1Ex4 hQtxz8jh6DB7PMxw63RgovvcfxtbGdnajjXfnagcoVTfuOqTaPZ/6ucWgFy8LL16Di VJVomaFstDCTrLNh4ZTyKFhOdr7rOqsXIYO2ocmQTTjD7P2LWjW7yvYS8UV2tWeLgz V5XsDpCms8aKg== Received: from localhost (localhost [127.0.0.1]) by mail.u-boot.org (Postfix) with ESMTP id 8D96E688AE; Mon, 1 Dec 2025 10:09:19 -0700 (MST) X-Virus-Scanned: Debian amavis at Received: from mail.u-boot.org ([127.0.0.1]) by localhost (mail.u-boot.org [127.0.0.1]) (amavis, port 10026) with ESMTP id 1uqXeWTbtz-y; Mon, 1 Dec 2025 10:09:19 -0700 (MST) DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=u-boot.org; s=default; t=1764608957; bh=8HHE2ieRyZ/jh0p/oVVpw43Oz1ja/zlFxSORp89sPjE=; h=From:To:Cc:Subject:Date:In-Reply-To:References:From; b=tEh6DfBfDFvTJ0Br7YrJdZvvgY3MGsTLSxrVRTSWxkQ1H9Hqw+A0vNiI6FyyDg6YG sThDArytqt7x0Ksa7P8xU+cu+j+ZUAXhNqENYHdDkBa+irUoOrq36y6d1nwcqg0ivB GL5Yn1xpGnNF4z03/VH2WjhSXs+jEYAFbvS8pTvm5KthEfXbLERA0ZmS0bKzZu/DiU o460xlGdKpA0ziPE+39TKEVrd7Tp4gtmsgsJtdzr4wMO8yRZoW9XKHYfdHooiG7gTH 9ClLeJ7Kt61OExABwfGerN8mGpLcgB75x3jhBWWCCUJ4J8oPWniCLl+J1iXeHPCg6q DXFSiqnp4kvdA== Received: from u-boot.org (unknown [73.34.74.121]) by mail.u-boot.org (Postfix) with ESMTPSA id 73BAF688CB; Mon, 1 Dec 2025 10:09:17 -0700 (MST) From: Simon Glass To: U-Boot Concept Date: Mon, 1 Dec 2025 10:05:04 -0700 Message-ID: <20251201170529.3237986-26-sjg@u-boot.org> X-Mailer: git-send-email 2.43.0 In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org> References: <20251201170529.3237986-1-sjg@u-boot.org> 
Subject: [Concept] [PATCH 25/37] mcheck: support memalign
CC: Heinrich Schuchardt, Simon Glass, Eugene Uriev, Claude

Implement the dlmemalign wrapper function for mcheck heap protection.
Uses mcheck_memalign_prehook() and mcheck_memalign_posthook() to
properly handle aligned allocations.

Changes from original commit:
- Uses dlmemalign/dlmemalign_impl instead of mEMALIGn/mEMALIGn_impl

Signed-off-by: Eugene Uriev
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit ae838768d79cbb834c4a8a5f4810df373e58b622)
---
 common/dlmalloc.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 102b6c2bf8d..c9eb18787e8 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -5628,7 +5628,12 @@ void *dlrealloc(void *oldmem, size_t bytes)
 
 void *dlmemalign(size_t alignment, size_t bytes)
 {
-	return NULL;
+	size_t fullsz = mcheck_memalign_prehook(alignment, bytes);
+	void *p = dlmemalign_impl(alignment, fullsz);
+
+	if (!p)
+		return p;
+	return mcheck_memalign_posthook(alignment, p, bytes);
 }
 
 /* dlpvalloc, dlvalloc redirect to dlmemalign, so they need no wrapping */

From patchwork Mon Dec 1 17:05:05 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:05 -0700
Message-ID: <20251201170529.3237986-27-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
Subject: [Concept] [PATCH 26/37] mcheck: add pedantic mode support
CC: Heinrich Schuchardt, Simon Glass, Eugene Uriev, Claude

Add mcheck_pedantic_prehook() calls to dlmalloc, dlrealloc, dlmemalign,
and dlcalloc wrapper functions. Also add the mcheck_pedantic() and
mcheck_check_all() API functions. The pedantic mode is runtime
controlled, so the registry hooks are called on every allocation
operation.

Changes from original commit:
- Uses dl* function names instead of mALLOc style names

Signed-off-by: Eugene Uriev
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit 18c1bfafe0ccdd3229d91bbb07ed942e9f233f93)
---
 common/dlmalloc.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index c9eb18787e8..4ee7c6c133f 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -5596,6 +5596,7 @@ size_t dlmalloc_usable_size(const void* mem) {
 
 void *dlmalloc(size_t bytes)
 {
+	mcheck_pedantic_prehook();
 	size_t fullsz = mcheck_alloc_prehook(bytes);
 	void *p = dlmalloc_impl(fullsz);
 
@@ -5608,6 +5609,7 @@ void dlfree(void *mem) { dlfree_impl(mcheck_free_prehook(mem)); }
 
 void *dlrealloc(void *oldmem, size_t bytes)
 {
+	mcheck_pedantic_prehook();
 	if (bytes == 0) {
 		if (oldmem)
 			dlfree(oldmem);
@@ -5628,6 +5630,7 @@ void *dlrealloc(void *oldmem, size_t bytes)
 
 void *dlmemalign(size_t alignment, size_t bytes)
 {
+	mcheck_pedantic_prehook();
 	size_t fullsz = mcheck_memalign_prehook(alignment, bytes);
 	void *p = dlmemalign_impl(alignment, fullsz);
 
@@ -5640,6 +5643,7 @@ void *dlmemalign(size_t alignment, size_t bytes)
 
 void *dlcalloc(size_t n, size_t elem_size)
 {
+	mcheck_pedantic_prehook();
 	/* NB: no overflow check here */
 	size_t fullsz = mcheck_alloc_prehook(n * elem_size);
 	void *p = dlcalloc_impl(1, fullsz);
 
@@ -5650,12 +5654,20 @@ void *dlcalloc(size_t n, size_t elem_size)
 }
 
 /* mcheck API */
+int mcheck_pedantic(mcheck_abortfunc_t f)
+{
+	mcheck_initialize(f, 1);
+	return 0;
+}
+
 int mcheck(mcheck_abortfunc_t f)
 {
 	mcheck_initialize(f, 0);
 	return 0;
 }
 
+void mcheck_check_all(void) { mcheck_pedantic_check(); }
+
 enum mcheck_status mprobe(void *__ptr) { return mcheck_mprobe(__ptr); }
 #endif /* MCHECK_HEAP_PROTECTION */

From patchwork Mon Dec 1 17:05:06 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:06 -0700
Message-ID: <20251201170529.3237986-28-sjg@u-boot.org>
Subject: [Concept] [PATCH 27/37] malloc: Avoid defining calloc()
CC: Heinrich Schuchardt, Simon Glass, Claude

The C runtime calls malloc() before starting main(), e.g. to get some
memory to use for dynamic linking. If CONFIG_TPL_SYS_MALLOC_SIMPLE is
enabled, the calloc() symbol is defined within U-Boot. The C runtime may
call that too.

Add the SYS_MALLOC_SIMPLE section to the new malloc.h header to redirect
malloc, realloc, calloc, and memalign to their simple implementations
when SYS_MALLOC_SIMPLE is enabled.

Add a COMPILING_DLMALLOC guard so that dlmalloc.c can include malloc.h
without hitting the SYS_MALLOC_SIMPLE redirects, which would otherwise
cause conflicts with the dlfree/free macro definitions.
Changes from original commit:
- Applied to new dlmalloc 2.8.6 malloc.h header structure

Signed-off-by: Simon Glass
Co-developed-by: Claude
Signed-off-by: Simon Glass
(cherry picked from commit 22f87ef53045c19df9a770c4101ed3ba744c1b35)
---
 include/malloc.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/malloc.h b/include/malloc.h
index 72db7fdb507..f8f0dbb9b70 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -68,6 +68,24 @@ extern "C" {
 #define USE_DL_PREFIX
 #endif
 
+/*
+ * When using simple malloc (SPL/TPL), redirect to simple implementations.
+ * Skip this when compiling dlmalloc.c itself to avoid conflicts.
+ */
+#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE)
+#define malloc malloc_simple
+#define realloc realloc_simple
+#define calloc calloc_simple
+#define memalign memalign_simple
+#if IS_ENABLED(CONFIG_VALGRIND)
+#define free free_simple
+#else
+static inline void free(void *ptr) {}
+#endif
+void *calloc(size_t nmemb, size_t size);
+void *realloc_simple(void *ptr, size_t size);
+#else /* !SYS_MALLOC_SIMPLE || COMPILING_DLMALLOC */
+
 #ifndef USE_DL_PREFIX
 #define dlcalloc calloc
 #define dlfree free
@@ -106,6 +124,7 @@ extern "C" {
 #define malloc_stats() dlmalloc_stats()
 #define malloc_usable_size(p) dlmalloc_usable_size(p)
 #endif /* USE_DL_PREFIX */
+#endif /* !SYS_MALLOC_SIMPLE || COMPILING_DLMALLOC */
 
 #if !NO_MALLINFO
 #ifndef HAVE_USR_INCLUDE_MALLOC_H

From patchwork Mon Dec 1 17:05:07 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:07 -0700
Message-ID: <20251201170529.3237986-29-sjg@u-boot.org>
Subject: [Concept] [PATCH 28/37] malloc: Set up the malloc() state in mem_malloc_init()
CC: Heinrich Schuchardt, Simon Glass, Claude
Move the malloc state initialisation from lazy init in sys_alloc() to
explicit init in mem_malloc_init(). This allows is_initialized() to
always return true for U-Boot, eliminating runtime checks.

The initialisation sets up:
- least_addr, seg.base, seg.size, seg.sflags
- magic, release_checks, mflags
- bins (via init_bins)
- top chunk (via init_top)
- footprint tracking

Add a simplified sys_alloc() for small builds. Since the heap is
pre-allocated with a fixed size, sys_alloc() only needs to extend via
sbrk() if space remains: no mmap, no multiple segments, no complex
merging. The helper functions mmap_alloc(), mmap_resize(),
prepend_alloc() and add_segment() are thus compiled out for non-sandbox
builds. This is controlled by a new SIMPLE_SYSALLOC option, which is the
default. Sandbox retains full functionality for testing.

With this, the new dlmalloc is approximately at parity with the old one,
e.g. about 400 bytes less code on Thumb2 (firefly-rk3288).

There is a strange case here: a small number of boards set up the full
malloc() even when CONFIG_SYS_MALLOC_SIMPLE is enabled. This cannot
work. With CONFIG_SPL_SYS_MALLOC_SIMPLE, all malloc()/free()/realloc()
calls are redirected to simple implementations via macros in the
malloc.h header. In this case, mem_malloc_init() doesn't need to init
the dlmalloc state structure (gm) since it will never be used. Initing
_gm_ pulls in the entire malloc_state BSS structure (~472 bytes) plus
initialisation code (~128 bytes), adding ~600 bytes to SPL on boards
that use full malloc (K3 platforms with CONFIG_K3_LOAD_SYSFW). Skip the
_gm_ init when SYS_MALLOC_SIMPLE is enabled.
These boards call mem_malloc_init() even though it will have no effect:

$ ./tools/qconfig.py -f CONFIG_K3_LOAD_SYSFW SPL_SYS_MALLOC_SIMPLE -l
am62ax_evm_r5 am62px_evm_r5 am62x_beagleplay_r5 am62x_evm_r5
am62x_evm_r5_ethboot am62x_lpsk_r5 am64x_evm_r5 am68_sk_r5 am69_sk_r5
j721s2_evm_r5 j722s_evm_r5 j784s4_evm_r5 phycore_am62x_r5
phycore_am62x_r5_usbdfu phycore_am64x_r5 verdin-am62_r5

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 88 +++++++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 86 insertions(+), 2 deletions(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 4ee7c6c133f..9330848d059 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -595,6 +595,11 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #define INSECURE 1
 #endif
 
+/* Use simplified sys_alloc for non-sandbox builds */
+#if !IS_ENABLED(CONFIG_SANDBOX)
+#define SIMPLE_SYSALLOC 1
+#endif
+
 #define MALLOC_FAILURE_ACTION
 #define ABORT do {} while (1)
 
@@ -2719,7 +2724,12 @@ static struct malloc_state _gm_;
 #endif /* !ONLY_MSPACES */
 
+#if defined(__UBOOT__) && SIMPLE_SYSALLOC
+/* U-Boot initializes in mem_malloc_init() so is_initialized() is always true */
+#define is_initialized(M) 1
+#else
 #define is_initialized(M) ((M)->top != 0)
+#endif
 
 /* -------------------------- system alloc setup ------------------------- */
 
@@ -3903,6 +3913,7 @@ static void internal_malloc_stats(mstate m) {
   requirements (especially in memalign).
 */
+#if !defined(__UBOOT__) || !SIMPLE_SYSALLOC
 /* Malloc using mmap */
 static void* mmap_alloc(mstate m, size_t nb) {
   size_t mmsize = mmap_align(nb + SIX_SIZE_T_SIZES + CHUNK_ALIGN_MASK);
@@ -3934,7 +3945,9 @@ static void* mmap_alloc(mstate m, size_t nb) {
   }
   return 0;
 }
+#endif /* !defined(__UBOOT__) || !SIMPLE_SYSALLOC */
 
+#if !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE
 /* Realloc using mmap */
 static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
   size_t oldsize = chunksize(oldp);
@@ -3969,12 +3982,13 @@ static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
   }
   return 0;
 }
+#endif /* !NO_REALLOC_IN_PLACE */
 
 /* -------------------------- mspace management -------------------------- */
 
 /* Initialize top chunk and its size */
-static void init_top(mstate m, mchunkptr p, size_t psize) {
+static void __maybe_unused init_top(mstate m, mchunkptr p, size_t psize) {
   /* Ensure alignment */
   size_t offset = align_offset(chunk2mem(p));
   p = (mchunkptr)((char*)p + offset);
@@ -3989,7 +4003,7 @@ static void init_top(mstate m, mchunkptr p, size_t psize) {
 }
 
 /* Initialize bins for a new mstate that is otherwise zeroed out */
-static void init_bins(mstate m) {
+static void __maybe_unused init_bins(mstate m) {
   /* Establish circular links for smallbins */
   bindex_t i;
   for (i = 0; i < NSMALLBINS; ++i) {
@@ -4017,6 +4031,7 @@ static void reset_on_error(mstate m) {
 }
 #endif /* PROCEED_ON_ERROR */
 
+#if !defined(__UBOOT__) || !SIMPLE_SYSALLOC
 /* Allocate chunk and prepend remainder with chunk in successor base.
  */
 static void* prepend_alloc(mstate m, char* newbase, char* oldbase,
                            size_t nb) {
@@ -4111,9 +4126,62 @@ static void add_segment(mstate m, char* tbase, size_t tsize, flag_t mmapped) {
   check_top_chunk(m, m->top);
 }
+#endif /* !__UBOOT__ || !SIMPLE_SYSALLOC */
 
 /* -------------------------- System allocation -------------------------- */
 
+#if defined(__UBOOT__) && SIMPLE_SYSALLOC
+/*
+ * U-Boot simplified sys_alloc: The heap is pre-allocated with fixed size in
+ * mem_malloc_init(), so we can only extend via sbrk() if space remains.
+ * No mmap, no multiple segments, no complex merging needed.
+ */
+static void* sys_alloc(mstate m, size_t nb) {
+  char* tbase;
+  size_t asize;
+  size_t tsize;
+
+  asize = granularity_align(nb + SYS_ALLOC_PADDING);
+  if (asize <= nb)
+    return NULL; /* wraparound */
+
+  tbase = (char *)CALL_MORECORE(asize);
+  if (tbase == CMFAIL) {
+    MALLOC_FAILURE_ACTION;
+    return NULL;
+  }
+  tsize = asize;
+
+  m->footprint += tsize;
+  if (m->footprint > m->max_footprint)
+    m->max_footprint = m->footprint;
+
+  /* Extend the top chunk - sbrk returns contiguous memory */
+  if (tbase == m->seg.base + m->seg.size) {
+    m->seg.size += tsize;
+    init_top(m, m->top, m->topsize + tsize);
+  } else {
+    /* Non-contiguous - shouldn't happen with U-Boot's simple sbrk */
+    return NULL;
+  }
+
+  if (nb < m->topsize) {
+    size_t rsize = m->topsize -= nb;
+    mchunkptr p = m->top;
+    mchunkptr r = m->top = chunk_plus_offset(p, nb);
+    r->head = rsize | PINUSE_BIT;
+    set_size_and_pinuse_of_inuse_chunk(m, p, nb);
+    check_top_chunk(m, m->top);
+    check_malloced_chunk(m, chunk2mem(p), nb);
+    return chunk2mem(p);
+  }
+
+  MALLOC_FAILURE_ACTION;
+  return NULL;
+}
+
+#else /* !__UBOOT__ || !SIMPLE_SYSALLOC */
+
 /* Get memory from system using MORECORE or MMAP */
 static void* sys_alloc(mstate m, size_t nb) {
   char* tbase = CMFAIL;
@@ -4322,6 +4390,7 @@ static void* sys_alloc(mstate m, size_t nb) {
   MALLOC_FAILURE_ACTION;
   return 0;
 }
+#endif /* !__UBOOT__ || !SIMPLE_SYSALLOC */
 
 /*
 ----------------------- system deallocation -------------------------- */
 
@@ -6601,6 +6670,21 @@ void mem_malloc_init(ulong start, ulong size)
 #if CONFIG_IS_ENABLED(SYS_MALLOC_CLEAR_ON_INIT)
 	memset((void *)mem_malloc_start, '\0', size);
 #endif
+
+#if !CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE)
+	/* Initialize the malloc state so is_initialized() is true */
+	gm->least_addr = (char *)mem_malloc_start;
+	gm->seg.base = (char *)mem_malloc_start;
+	gm->seg.size = size;
+	gm->seg.sflags = 0;	/* not mmapped */
+	gm->magic = mparams.magic;
+	gm->release_checks = MAX_RELEASE_CHECK_RATE;
+	gm->mflags = mparams.default_mflags;
+	init_bins(gm);
+	init_top(gm, (mchunkptr)mem_malloc_start, size - TOP_FOOT_SIZE);
+	gm->footprint = size;
+	gm->max_footprint = size;
+#endif
 }
 
 void malloc_enable_testing(int max_allocs)

From patchwork Mon Dec 1 17:05:08 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:08 -0700
Message-ID: <20251201170529.3237986-30-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 29/37] malloc: Allow building dlmalloc with SPL_SYS_MALLOC_SIMPLE

When building boards that use CONFIG_SPL_SYS_MALLOC_SIMPLE (such as
qemu-x86_64), we need to avoid a conflict between the stub free() defined
in malloc.h and the real free() defined in dlmalloc.c.

Fix this by defining COMPILING_DLMALLOC in dlmalloc.c before including
malloc.h, and by adding a guard to the latter.
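[Editorial illustration, not part of the patch: a minimal sketch of the guard pattern the commit message describes. The SYS_MALLOC_SIMPLE_ON and demo_free names are hypothetical stand-ins for the real U-Boot macros; the point is that a file defining COMPILING_DLMALLOC before the header logic runs keeps its real symbol name instead of being redirected.]

```c
/* Stand-in for CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE) being enabled */
#define SYS_MALLOC_SIMPLE_ON 1

/* dlmalloc.c defines this before including the header: */
#define COMPILING_DLMALLOC

/* --- stand-in for the guarded block in include/malloc.h --- */
#if SYS_MALLOC_SIMPLE_ON && !defined(COMPILING_DLMALLOC)
#define demo_free demo_free_simple
#endif
/* --- end header stand-in --- */

/*
 * Because COMPILING_DLMALLOC is defined, demo_free was NOT redirected,
 * so this defines the real function rather than demo_free_simple().
 */
void demo_free(void *p)
{
	(void)p;
}

/* Returns 1 if the macro redirect took effect, 0 if the guard stopped it */
int demo_free_is_redirected(void)
{
#ifdef demo_free
	return 1;
#else
	return 0;
#endif
}
```

Without the `!defined(COMPILING_DLMALLOC)` clause, the header would rename the function at its definition site, producing the duplicate-symbol conflict the patch fixes.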
Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 2 ++
 include/malloc.h  | 2 +-
 2 files changed, 3 insertions(+), 1 deletion(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 9330848d059..869473b2bd1 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -603,6 +603,8 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #define MALLOC_FAILURE_ACTION
 #define ABORT do {} while (1)

+#define COMPILING_DLMALLOC
+
 #include
 #include
 #include
diff --git a/include/malloc.h b/include/malloc.h
index f8f0dbb9b70..997651e5c9c 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -72,7 +72,7 @@ extern "C" {
  * When using simple malloc (SPL/TPL), redirect to simple implementations.
  * Skip this when compiling dlmalloc.c itself to avoid conflicts.
  */
-#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE)
+#if CONFIG_IS_ENABLED(SYS_MALLOC_SIMPLE) && !defined(COMPILING_DLMALLOC)
 #define malloc malloc_simple
 #define realloc realloc_simple
 #define calloc calloc_simple

From patchwork Mon Dec 1 17:05:09 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 805
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:09 -0700
Message-ID: <20251201170529.3237986-31-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass
Subject: [Concept] [PATCH 30/37] malloc: Add a way to control the size of dlmalloc

There are quite a few options available which can reduce code size. Most
of them only make sense in SPL. Add a Kconfig option to enable a smaller
dlmalloc for U-Boot proper and SPL.
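[Editorial illustration, not part of the patch: with these options a board picks its trade-off per phase in the defconfig. The board name below is made up; the option names follow the patch. This sketch keeps the full dlmalloc in SPL by overriding the `default y`, while still shrinking U-Boot proper.]

```
# configs/myboard_defconfig (hypothetical board)
CONFIG_SPL=y
# Shrink dlmalloc in U-Boot proper:
CONFIG_SYS_MALLOC_SMALL=y
# Keep the full dlmalloc in SPL, overriding 'default y':
# CONFIG_SPL_SYS_MALLOC_SMALL is not set
```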
Signed-off-by: Simon Glass
---
 Kconfig | 47 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 47 insertions(+)

diff --git a/Kconfig b/Kconfig
index 357f027cc97..c4a65597035 100644
--- a/Kconfig
+++ b/Kconfig
@@ -464,6 +464,53 @@ config SYS_MALLOC_DEFAULT_TO_INIT
	  If such a scenario is sought choose yes.

+config SYS_MALLOC_SMALL
+	bool "Optimise malloc for code size"
+	help
+	  Enable code-size optimisations for dlmalloc. This:
+
+	  - Disables tree bins for allocations >= 256 bytes, using simple
+	    linked-list bins instead. This changes large-allocation performance
+	    from O(log n) to O(n) but saves ~1.5-2KB.
+
+	  - Simplifies memalign() by removing fallback retry logic that attempts
+	    multiple allocation strategies when initial over-allocation fails.
+	    This saves ~100-150 bytes.
+
+	  - Disables in-place realloc optimisation, which resizes allocations
+	    without copying if space permits. This saves ~200 bytes.
+
+	  - Uses static malloc parameters instead of runtime-configurable ones.
+
+	  These optimisations may increase fragmentation and reduce performance
+	  for workloads with many large or aligned allocations, but are suitable
+	  for most U-Boot use cases where code size is more important.
+
+	  If unsure, say N.
+
+config SPL_SYS_MALLOC_SMALL
+	bool "Optimise malloc for code size in SPL"
+	depends on SPL && !SPL_SYS_MALLOC_SIMPLE
+	default y
+	help
+	  Enable code-size optimisations for dlmalloc in SPL. This:
+
+	  - Disables tree bins for allocations >= 256 bytes, using simple
+	    linked-list bins instead. This changes large-allocation performance
+	    from O(log n) to O(n) but saves ~1.5-2KB.
+
+	  - Simplifies memalign() by removing fallback retry logic. This saves
+	    ~100-150 bytes.
+
+	  - Disables in-place realloc optimisation. This saves ~200 bytes.
+
+	  - Uses static malloc parameters instead of runtime-configurable ones.
+
+	  SPL typically has predictable memory usage where these optimisations
+	  have minimal impact, making the code size savings worthwhile.
+
+	  If unsure, say Y to minimize SPL code size.
+
 config TOOLS_DEBUG
	bool "Enable debug information for tools"
	help

From patchwork Mon Dec 1 17:05:10 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 806
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:10 -0700
Message-ID: <20251201170529.3237986-32-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 31/37] malloc: Add NO_REALLOC_IN_PLACE option to reduce code size

Add a new NO_REALLOC_IN_PLACE option that disables the in-place realloc
optimization. When enabled via CONFIG_SYS_MALLOC_SMALL, realloc() always
allocates new memory, copies the data and frees the old block, instead of
trying to extend the existing allocation. This saves about 500 bytes by
eliminating the try_realloc_chunk() and mmap_resize() functions.

When unit tests are enabled, the extra realloc logic is included.
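[Editorial illustration, not part of the patch: the copy-based strategy described above, sketched outside dlmalloc. This is illustrative only — real dlmalloc recovers the old usable size from the chunk header (chunksize() minus overhead), whereas this hypothetical helper makes the caller pass it in.]

```c
#include <stdlib.h>
#include <string.h>

/*
 * Minimal sketch of the NO_REALLOC_IN_PLACE strategy: always allocate a
 * fresh block, copy the smaller of the old and new sizes, then free the
 * old block. 'old_size' stands in for the size dlmalloc would read from
 * the chunk header.
 */
void *realloc_by_copy(void *oldmem, size_t old_size, size_t bytes)
{
	void *mem;

	if (!oldmem)
		return malloc(bytes);

	mem = malloc(bytes);
	if (mem) {
		memcpy(mem, oldmem, old_size < bytes ? old_size : bytes);
		free(oldmem);
	}

	return mem;
}
```

The cost is an unconditional copy even when the block could have grown in place, which is the performance-for-size trade the commit message describes.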
Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 869473b2bd1..4439d278188 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -595,6 +595,10 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #define INSECURE 1
 #endif

+#if CONFIG_IS_ENABLED(SYS_MALLOC_SMALL)
+#define NO_REALLOC_IN_PLACE 1
+#endif
+
 /* Use simplified sys_alloc for non-sandbox builds */
 #if !IS_ENABLED(CONFIG_SANDBOX)
 #define SIMPLE_SYSALLOC 1
@@ -807,6 +811,9 @@ ulong mem_malloc_brk;
 #ifndef NO_SEGMENT_TRAVERSAL
 #define NO_SEGMENT_TRAVERSAL 0
 #endif /* NO_SEGMENT_TRAVERSAL */
+#ifndef NO_REALLOC_IN_PLACE
+#define NO_REALLOC_IN_PLACE 0
+#endif /* NO_REALLOC_IN_PLACE */

 /*
   mallopt tuning options.  SVID/XPG defines four standard parameter
@@ -3984,7 +3991,7 @@ static mchunkptr mmap_resize(mstate m, mchunkptr oldp, size_t nb, int flags) {
   }
   return 0;
 }
-#endif /* !NO_REALLOC_IN_PLACE */
+#endif /* !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE */

 /* -------------------------- mspace management -------------------------- */
@@ -5002,6 +5009,7 @@ void* dlcalloc_impl(size_t n_elements, size_t elem_size) {

 /* ------------ Internal support for realloc, memalign, etc -------------- */

+#if !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE
 /* Try to realloc; only in-place unless can_move true */
 static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
                                    int can_move) {
@@ -5081,6 +5089,7 @@ static mchunkptr try_realloc_chunk(mstate m, mchunkptr p, size_t nb,
   }
   return newp;
 }
+#endif /* !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE */

 static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
   void* mem = 0;
@@ -5444,8 +5453,9 @@ void* dlrealloc_impl(void* oldmem, size_t bytes) {
   }
 #endif /* REALLOC_ZERO_BYTES_FREES */
   else {
-    size_t nb = request2size(bytes);
     mchunkptr oldp = mem2chunk(oldmem);
+#if !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE
+    size_t nb = request2size(bytes);
 #if ! FOOTERS
     mstate m = gm;
 #else /* FOOTERS */
@@ -5484,10 +5494,23 @@ void* dlrealloc_impl(void* oldmem, size_t bytes) {
       }
     }
   }
+#else /* defined(__UBOOT__) && NO_REALLOC_IN_PLACE */
+    mem = dlmalloc_impl(bytes);
+    if (mem != 0) {
+      size_t oc = chunksize(oldp) - overhead_for(oldp);
+      memcpy(mem, oldmem, (oc < bytes)? oc : bytes);
+#ifdef __UBOOT__
+      VALGRIND_MALLOCLIKE_BLOCK(mem, bytes, SIZE_SZ, false);
+      VALGRIND_FREELIKE_BLOCK(oldmem, SIZE_SZ);
+#endif
+      dlfree_impl(oldmem);
+    }
+#endif /* !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE */
   }
   return mem;
 }

+#if !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE
 void* dlrealloc_in_place(void* oldmem, size_t bytes) {
   void* mem = 0;
   if (oldmem != 0) {
@@ -5522,6 +5545,7 @@ void* dlrealloc_in_place(void* oldmem, size_t bytes) {
   }
   return mem;
 }
+#endif /* !defined(__UBOOT__) || !NO_REALLOC_IN_PLACE */

 STATIC_IF_MCHECK
 void* dlmemalign_impl(size_t alignment, size_t bytes) {

From patchwork Mon Dec 1 17:05:11 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 807
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:11 -0700
Message-ID: <20251201170529.3237986-33-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 32/37] malloc: Add NO_TREE_BINS option to reduce code size

Add a new NO_TREE_BINS option to disable binary-tree bins for large
allocations (>= 256 bytes). While this is an invasive change, it saves
about 1.25K of code size on arm64 (as well as 250 bytes of data).
When enabled, all large chunks use a simple doubly-linked list instead of
tree bins, trading O(log n) performance for smaller code size. The
trade-off is that large allocations use an O(n) search instead of
O(log n), and fragmentation could also become worse. So performance will
suffer when there are many large allocations and frees, but this is rare
in SPL.

Implementation:
- Add a dedicated mchunkptr largebin field to malloc_state
- Replace the treebins[NTREEBINS] array with a single linked-list pointer
- Implement simplified insert/unlink operations using the largebin list
- Update the allocation functions (tmalloc_small/large) for linear search
- Update the heap-checking functions (do_check_treebin, bin_find) to
  handle linked-list traversal instead of tree traversal

It is enabled by CONFIG_SYS_MALLOC_SMALL, i.e. by default in SPL.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 157 +++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 154 insertions(+), 3 deletions(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 4439d278188..13ae0e10918 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -597,6 +600,9 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #if CONFIG_IS_ENABLED(SYS_MALLOC_SMALL)
 #define NO_REALLOC_IN_PLACE 1
+#define NO_TREE_BINS 1
+#else
+#define NO_TREE_BINS 0
 #endif

 /* Use simplified sys_alloc for non-sandbox builds */
@@ -2686,7 +2689,11 @@ struct malloc_state {
   size_t release_checks;
   size_t magic;
   mchunkptr smallbins[(NSMALLBINS+1)*2];
+#if defined(__UBOOT__) && NO_TREE_BINS
+  mchunkptr largebin;  /* Single linked list for all large chunks */
+#else
   tbinptr treebins[NTREEBINS];
+#endif
   size_t footprint;
   size_t max_footprint;
   size_t footprint_limit; /* zero means no limit */
@@ -2914,7 +2921,9 @@ static void do_check_mmapped_chunk(mstate m, mchunkptr p);
 static void do_check_inuse_chunk(mstate m, mchunkptr p);
 static void do_check_free_chunk(mstate m, mchunkptr p);
 static void do_check_malloced_chunk(mstate m, void* mem, size_t s);
+#if !defined(__UBOOT__) || !NO_TREE_BINS
 static void do_check_tree(mstate m, tchunkptr t);
+#endif
 static void do_check_treebin(mstate m, bindex_t i);
 static void do_check_smallbin(mstate m, bindex_t i);
 static void do_check_malloc_state(mstate m);
@@ -2924,6 +2933,8 @@ static size_t traverse_and_check(mstate m);

 /* ---------------------------- Indexing Bins ---------------------------- */

+/* When NO_TREE_BINS is enabled, large chunks use a single linked list
+   in treebin[0] instead of the tree structure */
 #define is_small(s)         (((s) >> SMALLBIN_SHIFT) < NSMALLBINS)
 #define small_index(s)      (bindex_t)((s) >> SMALLBIN_SHIFT)
 #define small_index2size(i) ((i) << SMALLBIN_SHIFT)
@@ -3397,6 +3408,7 @@ static void do_check_malloced_chunk(mstate m, void* mem, size_t s) {
   }
 }

+#if !defined(__UBOOT__) || !NO_TREE_BINS
 /* Check a tree and its subtrees.  */
 static void do_check_tree(mstate m, tchunkptr t) {
   tchunkptr head = 0;
@@ -3447,9 +3459,28 @@ static void do_check_tree(mstate m, tchunkptr t) {
   } while (u != t);
   assert(head != 0);
 }
+#endif

 /* Check all the chunks in a treebin.  */
 static void do_check_treebin(mstate m, bindex_t i) {
+#if defined(__UBOOT__) && NO_TREE_BINS
+  /* With NO_TREE_BINS, only index 0 is used for the large bin list */
+  if (i == 0) {
+    mchunkptr p = m->largebin;
+    if (p != 0) {
+      /* Check the linked list */
+      mchunkptr start = p;
+      do {
+        do_check_any_chunk(m, p);
+        assert(!is_inuse(p));
+        assert(!next_pinuse(p));
+        assert(p->fd->bk == p);
+        assert(p->bk->fd == p);
+        p = p->fd;
+      } while (p != start);
+    }
+  }
+#else
   tbinptr* tb = treebin_at(m, i);
   tchunkptr t = *tb;
   int empty = (m->treemap & (1U << i)) == 0;
@@ -3457,6 +3488,7 @@ static void do_check_treebin(mstate m, bindex_t i) {
   assert(empty);
   if (!empty)
     do_check_tree(m, t);
+#endif
 }

 /* Check all the chunks in a smallbin.
 */
@@ -3498,6 +3530,18 @@ static int bin_find(mstate m, mchunkptr x) {
       }
     }
     else {
+#if defined(__UBOOT__) && NO_TREE_BINS
+      /* With NO_TREE_BINS, all large chunks are in largebin list */
+      if (m->largebin != 0) {
+        mchunkptr p = m->largebin;
+        mchunkptr start = p;
+        do {
+          if (p == x)
+            return 1;
+          p = p->fd;
+        } while (p != start);
+      }
+#else
       bindex_t tidx;
       compute_tree_index(size, tidx);
       if (treemap_is_marked(m, tidx)) {
@@ -3515,6 +3559,7 @@ static int bin_find(mstate m, mchunkptr x) {
         } while ((u = u->fd) != t);
       }
     }
+#endif
   }
   return 0;
 }
@@ -3744,6 +3789,53 @@ static void internal_malloc_stats(mstate m) {

 /* ------------------------- Operations on trees ------------------------- */

+#if defined(__UBOOT__) && NO_TREE_BINS
+/* When tree bins are disabled, use a simple doubly-linked list for all large chunks */
+static void insert_large_chunk(mstate M, tchunkptr X, size_t S) {
+  mchunkptr XP = (mchunkptr)(X);
+  mchunkptr F = M->largebin;
+  (void)S;  /* unused in NO_TREE_BINS mode */
+  if (F == 0) {
+    M->largebin = XP;
+    XP->fd = XP->bk = XP;
+  }
+  else if (RTCHECK(ok_address(M, F))) {
+    mchunkptr B = F->bk;
+    if (RTCHECK(ok_address(M, B))) {
+      XP->fd = F;
+      XP->bk = B;
+      F->bk = XP;
+      B->fd = XP;
+    }
+    else {
+      CORRUPTION_ERROR_ACTION(M);
+    }
+  }
+  else {
+    CORRUPTION_ERROR_ACTION(M);
+  }
+}
+
+static void unlink_large_chunk(mstate M, tchunkptr X) {
+  mchunkptr XP = (mchunkptr)(X);
+  mchunkptr F = XP->fd;
+  mchunkptr B = XP->bk;
+  if (F == XP) {
+    M->largebin = 0;
+  }
+  else if (RTCHECK(ok_address(M, F) && F->bk == XP && ok_address(M, B) && B->fd == XP)) {
+    F->bk = B;
+    B->fd = F;
+    if (M->largebin == XP)
+      M->largebin = F;
+  }
+  else {
+    CORRUPTION_ERROR_ACTION(M);
+  }
+}
+
+#else /* !defined(__UBOOT__) || !NO_TREE_BINS */
+
 /* Insert chunk into tree */
 #define insert_large_chunk(M, X, S) {\
   tbinptr* H;\
@@ -3884,6 +3976,8 @@ static void internal_malloc_stats(mstate m) {
   }\
 }

+#endif /* !defined(__UBOOT__) || !NO_TREE_BINS */
+
 /* Relays to large vs small bin
    operations */

#define insert_chunk(M, P, S)\
@@ -4593,7 +4687,26 @@ static void dispose_chunk(mstate m, mchunkptr p, size_t psize) {
 static void* tmalloc_large(mstate m, size_t nb) {
   tchunkptr v = 0;
   size_t rsize = -nb; /* Unsigned negation */
+#if !defined(__UBOOT__) || !NO_TREE_BINS
   tchunkptr t;
+#endif
+#if defined(__UBOOT__) && NO_TREE_BINS
+  /* With NO_TREE_BINS, do a linear search through largebin list */
+  if (m->largebin != 0) {
+    mchunkptr p = m->largebin;
+    mchunkptr start = p;
+    do {
+      size_t trem = chunksize(p) - nb;
+      if (trem < rsize) {
+        rsize = trem;
+        v = (tchunkptr)p;
+        if (rsize == 0)
+          break;
+      }
+      p = p->fd;
+    } while (p != start);
+  }
+#else
   bindex_t idx;
   compute_tree_index(nb, idx);
   if ((t = *treebin_at(m, idx)) != 0) {
@@ -4637,6 +4750,7 @@ static void* tmalloc_large(mstate m, size_t nb) {
     }
     t = leftmost_child(t);
   }
+#endif

   /* If dv is a better fit, return 0 so malloc will use it */
   if (v != 0 && rsize < (size_t)(m->dvsize - nb)) {
@@ -4662,8 +4776,32 @@ static void* tmalloc_large(mstate m, size_t nb) {

 /* allocate a small request from the best fitting chunk in a treebin */
 static void* tmalloc_small(mstate m, size_t nb) {
-  tchunkptr t, v;
+#if !defined(__UBOOT__) || !NO_TREE_BINS
+  tchunkptr t;
+#endif
+  tchunkptr v;
   size_t rsize;
+#if defined(__UBOOT__) && NO_TREE_BINS
+  /* With NO_TREE_BINS, use largebin list for best fit search */
+  if (m->largebin != 0) {
+    mchunkptr p = m->largebin;
+    mchunkptr best = p;
+    rsize = chunksize(p) - nb;
+    /* Scan the list for the best fit */
+    mchunkptr start = p;
+    while ((p = p->fd) != start) {
+      size_t trem = chunksize(p) - nb;
+      if (trem < rsize) {
+        rsize = trem;
+        best = p;
+      }
+    }
+    v = (tchunkptr)best;
+  }
+  else {
+    return 0;
+  }
+#else
   bindex_t i;
   binmap_t leastbit = least_bit(m->treemap);
   compute_bit2idx(leastbit, i);
@@ -4677,6 +4815,7 @@ static void* tmalloc_small(mstate m, size_t nb) {
       v = t;
     }
   }
+#endif

   if (RTCHECK(ok_address(m, v))) {
     mchunkptr r = chunk_plus_offset(v, nb);
@@ -4794,7 +4933,13 @@ void* dlmalloc_impl(size_t bytes) {
       goto postaction;
     }

-    else if (gm->treemap != 0 && (mem = tmalloc_small(gm, nb)) != 0) {
+    else if (
+#if defined(__UBOOT__) && NO_TREE_BINS
+	     gm->largebin != 0 &&
+#else
+	     gm->treemap != 0 &&
+#endif
+	     (mem = tmalloc_small(gm, nb)) != 0) {
       check_malloced_chunk(gm, mem, nb);
       goto postaction;
     }
@@ -4804,7 +4949,13 @@ void* dlmalloc_impl(size_t bytes) {
       nb = MAX_SIZE_T; /* Too big to allocate. Force failure (in sys alloc) */
     else {
       nb = pad_request(bytes);
-      if (gm->treemap != 0 && (mem = tmalloc_large(gm, nb)) != 0) {
+      if (
+#if defined(__UBOOT__) && NO_TREE_BINS
+	  gm->largebin != 0 &&
+#else
+	  gm->treemap != 0 &&
+#endif
+	  (mem = tmalloc_large(gm, nb)) != 0) {
        check_malloced_chunk(gm, mem, nb);
        goto postaction;
      }

From patchwork Mon Dec 1 17:05:12 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 808
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:12 -0700
Message-ID: <20251201170529.3237986-34-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 33/37] malloc: Add SIMPLE_MEMALIGN to simplify memalign for code size

Add a new SIMPLE_MEMALIGN option to remove the fallback-retry logic from
memalign(), to reduce code size in SPL.

The fallback mechanism attempts multiple allocation strategies:

1. Over-allocate to guarantee finding aligned space
2. If that fails, allocate the exact size and check if it is aligned
3. If not aligned, free and retry with calculated extra space

While this fallback is useful in low-memory situations, SPL typically has
predictable memory usage and sufficient heap space for the initial
over-allocation to succeed. The fallback adds code complexity without
obvious practical benefit.

This reduces the code size of imx8mp_venice SPL (for example) by 74 bytes.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 13ae0e10918..65bfb97e1db 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -597,9 +597,11 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #if CONFIG_IS_ENABLED(SYS_MALLOC_SMALL)
 #define NO_REALLOC_IN_PLACE 1
+#define SIMPLE_MEMALIGN 1
 #define NO_TREE_BINS 1
 #else
 #define NO_TREE_BINS 0
+#define SIMPLE_MEMALIGN 0
 #endif

 /* Use simplified sys_alloc for non-sandbox builds */
@@ -5260,7 +5262,7 @@ static void* internal_memalign(mstate m, size_t alignment, size_t bytes) {
     size_t nb = request2size(bytes);
     size_t req = nb + alignment + MIN_CHUNK_SIZE - CHUNK_OVERHEAD;
     mem = internal_malloc(m, req);
-#ifdef __UBOOT__
+#if defined(__UBOOT__) && !SIMPLE_MEMALIGN
   /*
    * The attempt to over-allocate (with a size large enough to guarantee the
    * ability to find an aligned region within allocated memory) failed.
From patchwork Mon Dec 1 17:05:13 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 809
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:13 -0700
Message-ID: <20251201170529.3237986-35-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 34/37] malloc: Add SMALLCHUNKS_AS_FUNCS to convert macros to funcs

The insert_small_chunk() and unlink_first_small_chunk() macros are
inlined at multiple places in the code. Provide an option to convert
these to functions, so the compiler can try to reduce code size.

Add braces to the insert_chunk macro.

This reduces the code size of imx8mp_venice SPL by about 208 bytes.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c | 50 +++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 50 insertions(+)

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 65bfb97e1db..54fd2e4a08a 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -599,6 +599,7 @@ static inline void MALLOC_COPY(void *dest, const void *src, size_t sz) { memcpy(
 #define NO_REALLOC_IN_PLACE 1
 #define SIMPLE_MEMALIGN 1
 #define NO_TREE_BINS 1
+#define SMALLCHUNKS_AS_FUNCS 1
 #else
 #define NO_TREE_BINS 0
 #define SIMPLE_MEMALIGN 0
@@ -632,6 +633,10 @@ ulong mem_malloc_brk;

 #endif /* __UBOOT__ */

+#ifndef SMALLCHUNKS_AS_FUNCS
+#define SMALLCHUNKS_AS_FUNCS 0
+#endif
+
 #ifndef WIN32
 #ifdef _WIN32
 #define WIN32 1
@@ -3714,6 +3719,25 @@ static void internal_malloc_stats(mstate m) {
 */

 /* Link a free chunk into a smallbin */
+#if defined(__UBOOT__) && SMALLCHUNKS_AS_FUNCS
+static void insert_small_chunk(mstate M, mchunkptr P, size_t S) {
+  bindex_t I = small_index(S);
+  mchunkptr B = smallbin_at(M, I);
+  mchunkptr F = B;
+  assert(S >= MIN_CHUNK_SIZE);
+  if (!smallmap_is_marked(M, I))
+    mark_smallmap(M, I);
+  else if (RTCHECK(ok_address(M, B->fd)))
+    F = B->fd;
+  else {
+    CORRUPTION_ERROR_ACTION(M);
+  }
+  B->fd = P;
+  F->bk = P;
+  P->fd = F;
+  P->bk = B;
+}
+#else
 #define insert_small_chunk(M, P, S) {\
   bindex_t I = small_index(S);\
   mchunkptr B = smallbin_at(M, I);\
@@ -3731,6 +3755,7 @@ static void internal_malloc_stats(mstate m) {
   P->fd = F;\
   P->bk = B;\
 }
+#endif

 /* Unlink a chunk from a smallbin */
 #define unlink_small_chunk(M, P, S) {\
@@ -3759,6 +3784,24 @@ static void internal_malloc_stats(mstate m) {
 }

 /* Unlink the first chunk from a smallbin */
+#if defined(__UBOOT__) && SMALLCHUNKS_AS_FUNCS
+static void unlink_first_small_chunk(mstate M, mchunkptr B, mchunkptr P, bindex_t I) {
+  mchunkptr F = P->fd;
+  assert(P != B);
+  assert(P != F);
+  assert(chunksize(P) == small_index2size(I));
+  if (B == F) {
+    clear_smallmap(M, I);
+  }
+  else if (RTCHECK(ok_address(M, F) && F->bk == P)) {
+    F->bk = B;
+    B->fd = F;
+  }
+  else {
+    CORRUPTION_ERROR_ACTION(M);
+  }
+}
+#else
 #define unlink_first_small_chunk(M, B, P, I) {\
   mchunkptr F = P->fd;\
   assert(P != B);\
@@ -3775,6 +3818,7 @@ static void internal_malloc_stats(mstate m) {
     CORRUPTION_ERROR_ACTION(M);\
   }\
 }
+#endif

 /* Replace dv node, binning the old one */
 /* Used only when dvsize known to be small */
@@ -3982,9 +4026,15 @@ static void unlink_large_chunk(mstate M, tchunkptr X) {

 /* Relays to large vs small bin operations */

+#if defined(__UBOOT__) && SMALLCHUNKS_AS_FUNCS
+#define insert_chunk(M, P, S)\
+  if (is_small(S)) { insert_small_chunk(M, P, S); }\
+  else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
+#else
 #define insert_chunk(M, P, S)\
   if (is_small(S)) insert_small_chunk(M, P, S)\
   else { tchunkptr TP = (tchunkptr)(P); insert_large_chunk(M, TP, S); }
+#endif

 #define unlink_chunk(M, P, S)\
   if (is_small(S)) unlink_small_chunk(M, P, S)\

From patchwork Mon Dec 1 17:05:14 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 811
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:14 -0700
Message-ID: <20251201170529.3237986-36-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 35/37] test: Add some tests for dlmalloc

We know or assume that dlmalloc itself works correctly, but there is
still the possibility that the U-Boot integration has bugs. Add a test
suite for the malloc() implementation, covering:

- Basic malloc/free operations
- Edge cases (zero size, NULL pointer handling)
- realloc() in various scenarios
- memalign() with different alignments
- Multiple allocations and fragmentation
- malloc_enable_testing() failure simulation
- Large allocations (1MB, 16MB)
- Full pool allocation (CONFIG_SYS_MALLOC_LEN plus environment size)
- Fill pool test with random sizes

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 test/common/Makefile |   1 +
 test/common/malloc.c | 629 +++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 630 insertions(+)
 create mode 100644 test/common/malloc.c

diff --git a/test/common/Makefile b/test/common/Makefile
index a5df946396a..9674bbec030 100644
--- a/test/common/Makefile
+++ b/test/common/Makefile
@@ -13,5 +13,6 @@ obj-$(CONFIG_CONSOLE_PAGER) += console.o
 obj-$(CONFIG_CYCLIC) += cyclic.o
 obj-$(CONFIG_EVENT_DYNAMIC) += event.o
 obj-y += cread.o
+obj-y += malloc.o
 obj-$(CONFIG_CONSOLE_PAGER) += pager.o
 obj-$(CONFIG_$(PHASE_)CMDLINE) += print.o
diff --git a/test/common/malloc.c b/test/common/malloc.c
new file mode 100644
index 00000000000..b114267dd83
--- /dev/null
+++ b/test/common/malloc.c
@@ -0,0 +1,629 @@
+// SPDX-License-Identifier: GPL-2.0+
+/*
+ * Tests for malloc() implementation
+ *
+ * Copyright 2025 Google LLC
+ * Written by Simon Glass
+ */
+
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+#include
+
+DECLARE_GLOBAL_DATA_PTR;
+
+/*
+ * get_alloced_size() - Get currently allocated memory size
+ *
+ * Return: Number of bytes currently allocated (not freed)
+ */
+static int get_alloced_size(void)
+{
+	struct mallinfo info = mallinfo();
+
+	return info.uordblks;
+}
+
+/* Test basic malloc() and free() */
+static int common_test_malloc_basic(struct unit_test_state *uts)
+{
+	int before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	ptr = malloc(100);
+	ut_assertnonnull(ptr);
+
+	ut_assert(get_alloced_size() >= before + 100);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_basic, 0);
+
+/* Test malloc() with zero size and free(NULL) */
+static int common_test_malloc_zero(struct unit_test_state *uts)
+{
+	int before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	ptr = malloc(0);
+	ut_assertnonnull(ptr);
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_zero, 0);
+
+/* Test calloc() zeros memory */
+static int common_test_calloc(struct unit_test_state *uts)
+{
+	int before, i;
+	char *ptr;
+
+	before = get_alloced_size();
+
+	ptr = calloc(100, 1);
+	ut_assertnonnull(ptr);
+
+	for (i = 0; i < 100; i++)
+		ut_asserteq(0, ptr[i]);
+
+	ut_assert(get_alloced_size() >= before + 100);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_calloc, 0);
+
+/* Test realloc() to larger size */
+static int common_test_realloc_larger(struct unit_test_state *uts)
+{
+	char *ptr, *ptr2;
+	int before, i;
+
+	before = get_alloced_size();
+
+	ptr = malloc(50);
+	ut_assertnonnull(ptr);
+
+	for (i = 0; i < 50; i++)
+		ptr[i] = i;
+
+	ptr2 = realloc(ptr, 100);
+	ut_assertnonnull(ptr2);
+
+	/*
+	 * Check original data preserved
+	 */
+	for (i = 0; i < 50; i++)
+		ut_asserteq(i, ptr2[i]);
+
+	free(ptr2);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_realloc_larger, 0);
+
+/* Test realloc() to smaller size */
+static int common_test_realloc_smaller(struct unit_test_state *uts)
+{
+	char *ptr, *ptr2;
+	int before, i;
+
+	before = get_alloced_size();
+
+	ptr = malloc(100);
+	ut_assertnonnull(ptr);
+
+	for (i = 0; i < 100; i++)
+		ptr[i] = i;
+
+	ptr2 = realloc(ptr, 50);
+	ut_assertnonnull(ptr2);
+
+	/*
+	 * Check data preserved
+	 */
+	for (i = 0; i < 50; i++)
+		ut_asserteq(i, ptr2[i]);
+
+	free(ptr2);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_realloc_smaller, 0);
+
+/* Test realloc() with NULL pointer (should act like malloc) */
+static int common_test_realloc_null(struct unit_test_state *uts)
+{
+	int before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	ptr = realloc(NULL, 100);
+	ut_assertnonnull(ptr);
+	ut_assert(get_alloced_size() >= before + 100);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_realloc_null, 0);
+
+/*
+ * Test realloc() with zero size
+ *
+ * Standard dlmalloc behavior (without REALLOC_ZERO_BYTES_FREES or mcheck):
+ * realloc(ptr, 0) returns a minimum-sized allocation.
+ */
+static int common_test_realloc_zero(struct unit_test_state *uts)
+{
+	void *ptr, *ptr2;
+	int before;
+
+	before = get_alloced_size();
+
+	ptr = malloc(100);
+	ut_assertnonnull(ptr);
+	ut_assert(get_alloced_size() >= before + 100);
+
+	ptr2 = realloc(ptr, 0);
+
+	/*
+	 * dlmalloc returns a minimum-sized allocation for realloc(ptr, 0)
+	 * since REALLOC_ZERO_BYTES_FREES is not enabled.
+	 * It may realloc in-place or return a different pointer.
+	 */
+	ut_assertnonnull(ptr2);
+
+	free(ptr2);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_realloc_zero, 0);
+
+/* Test memalign() with various alignments */
+static int common_test_memalign(struct unit_test_state *uts)
+{
+	int before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	/*
+	 * Test power-of-2 alignments
+	 */
+	ptr = memalign(16, 100);
+	ut_assertnonnull(ptr);
+	ut_asserteq(0, (ulong)ptr & 0xf);
+	free(ptr);
+
+	ptr = memalign(256, 100);
+	ut_assertnonnull(ptr);
+	ut_asserteq(0, (ulong)ptr & 0xff);
+	free(ptr);
+
+	ptr = memalign(4096, 100);
+	ut_assertnonnull(ptr);
+	ut_asserteq(0, (ulong)ptr & 0xfff);
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_memalign, 0);
+
+/* Test multiple allocations */
+static int common_test_malloc_multiple(struct unit_test_state *uts)
+{
+	int expected = 0, before, i;
+	void *ptrs[10];
+
+	before = get_alloced_size();
+
+	/* Allocate multiple blocks */
+	for (i = 0; i < 10; i++) {
+		ptrs[i] = malloc((i + 1) * 100);
+		ut_assertnonnull(ptrs[i]);
+		expected += (i + 1) * 100;
+	}
+
+	/* Should have allocated at least the requested amount */
+	ut_assert(get_alloced_size() >= before + expected);
+
+	/* Free in reverse order */
+	for (i = 9; i >= 0; i--)
+		free(ptrs[i]);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_multiple, 0);
+
+/* Test malloc() failure when testing enabled */
+static int common_test_malloc_failure(struct unit_test_state *uts)
+{
+	void *ptr1, *ptr2, *ptr3;
+	int before;
+
+	before = get_alloced_size();
+
+	/* Enable failure after 2 allocations */
+	malloc_enable_testing(2);
+
+	ptr1 = malloc(100);
+	ut_assertnonnull(ptr1);
+
+	ptr2 = malloc(100);
+	ut_assertnonnull(ptr2);
+
+	/* This should fail */
+	ptr3 = malloc(100);
+	ut_assertnull(ptr3);
+
+	malloc_disable_testing();
+
+	/* Should work again */
+	ptr3 = malloc(100);
+	ut_assertnonnull(ptr3);
+
+	free(ptr1);
+	free(ptr2);
+	free(ptr3);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_failure, 0);
+
+/* Test realloc() failure when testing enabled */
+static int common_test_realloc_failure(struct unit_test_state *uts)
+{
+	void *ptr1, *ptr2;
+	int before;
+
+	before = get_alloced_size();
+
+	ptr1 = malloc(50);
+	ut_assertnonnull(ptr1);
+
+	/* Enable failure after 0 allocations */
+	malloc_enable_testing(0);
+
+	/* This should fail and return NULL, leaving ptr1 intact */
+	ptr2 = realloc(ptr1, 100);
+	ut_assertnull(ptr2);
+
+	malloc_disable_testing();
+
+	/* ptr1 should still be valid, try to realloc it */
+	ptr2 = realloc(ptr1, 100);
+	ut_assertnonnull(ptr2);
+
+	free(ptr2);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_realloc_failure, 0);
+
+/* Test large allocation */
+static int common_test_malloc_large(struct unit_test_state *uts)
+{
+	int size = SZ_1M, before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	ptr = malloc(size);
+	ut_assertnonnull(ptr);
+	memset(ptr, 0x5a, size);
+
+	ut_assert(get_alloced_size() >= before + size);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_large, 0);
+
+/* Test many small allocations (tests binning) */
+static int common_test_malloc_small_bins(struct unit_test_state *uts)
+{
+	int after_free, before, i;
+	void *ptrs[100];
+
+	before = get_alloced_size();
+
+	/* Allocate many small blocks of various sizes */
+	for (i = 0; i < 100; i++) {
+		ptrs[i] = malloc((i % 32) + 8);
+		ut_assertnonnull(ptrs[i]);
+	}
+
+	/* Free every other one to create fragmentation */
+	for (i = 0; i < 100; i += 2)
+		free(ptrs[i]);
+
+	after_free = get_alloced_size();
+
+	/* Allocate more to test reuse */
+	for (i = 0; i < 100; i += 2) {
+		ptrs[i] = malloc((i % 32) + 8);
+		ut_assertnonnull(ptrs[i]);
+	}
+
+	/* Should be back to roughly the same size (may vary due to overhead) */
+	ut_assert(get_alloced_size() >= after_free);
+
+	/* Free all */
+	for (i = 0; i < 100; i++)
+		free(ptrs[i]);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_small_bins, 0);
+
+/* Test alternating allocation sizes */
+static int common_test_malloc_alternating(struct unit_test_state *uts)
+{
+	void *small1, *large1, *small2, *large2;
+	int before;
+
+	before = get_alloced_size();
+
+	small1 = malloc(32);
+	ut_assertnonnull(small1);
+
+	large1 = malloc(8192);
+	ut_assertnonnull(large1);
+
+	small2 = malloc(64);
+	ut_assertnonnull(small2);
+
+	large2 = malloc(16384);
+	ut_assertnonnull(large2);
+
+	ut_assert(get_alloced_size() >= before + 32 + 8192 + 64 + 16384);
+
+	free(small1);
+	free(large1);
+	free(small2);
+	free(large2);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_alternating, 0);
+
+/* Test malloc() with boundary sizes */
+static int common_test_malloc_boundaries(struct unit_test_state *uts)
+{
+	int before;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	/* Test allocation right at small/large boundary (typically 256 bytes) */
+	ptr = malloc(256);
+	ut_assertnonnull(ptr);
+	free(ptr);
+
+	ptr = malloc(255);
+	ut_assertnonnull(ptr);
+	free(ptr);
+
+	ptr = malloc(257);
+	ut_assertnonnull(ptr);
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_boundaries, 0);
+
+/* Test malloc_usable_size() */
+static int common_test_malloc_usable_size(struct unit_test_state *uts)
+{
+	int before, size;
+	void *ptr;
+
+	before = get_alloced_size();
+
+	ptr = malloc(100);
+	ut_assertnonnull(ptr);
+
+	size = malloc_usable_size(ptr);
+	/* Usable size should be at least the requested size */
+	ut_assert(size >= 100);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_usable_size, 0);
+
+/* Test mallinfo() returns reasonable values */
+static int common_test_mallinfo(struct unit_test_state *uts)
+{
+	void *ptr1, *ptr2, *ptr3;
+	struct mallinfo info;
+	int arena_before;
+	int used_after1;
+	int used_after2;
+	int before;
+
+	before = get_alloced_size();
+
+	info = mallinfo();
+	arena_before = info.arena;
+
+	ptr1 = malloc(1024);
+	ut_assertnonnull(ptr1);
+
+	info = mallinfo();
+	/* Arena size should not change (it's the total heap size) */
+	ut_asserteq(arena_before, info.arena);
+	/* Used memory should increase */
+	ut_assert(info.uordblks >= before + 1024);
+	used_after1 = info.uordblks;
+
+	ptr2 = malloc(2048);
+	ut_assertnonnull(ptr2);
+
+	info = mallinfo();
+	ut_asserteq(arena_before, info.arena);
+	ut_assert(info.uordblks >= used_after1 + 2048);
+	used_after2 = info.uordblks;
+
+	ptr3 = malloc(512);
+	ut_assertnonnull(ptr3);
+
+	info = mallinfo();
+	ut_asserteq(arena_before, info.arena);
+	ut_assert(info.uordblks >= used_after2 + 512);
+
+	free(ptr1);
+	free(ptr2);
+	free(ptr3);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_mallinfo, 0);
+
+/* Test allocating a very large size */
+static int common_test_malloc_very_large(struct unit_test_state *uts)
+{
+	size_t size, before;
+	void *ptr;
+
+	before = get_alloced_size();
+	size = TOTAL_MALLOC_LEN - before - SZ_64K;
+
+	ptr = malloc(size);
+	ut_assertnonnull(ptr);
+	ut_assert(get_alloced_size() >= before + size);
+
+	free(ptr);
+
+	ut_asserteq(before, get_alloced_size());
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_very_large, 0);
+
+/* Test allocating the full malloc pool size */
+static int common_test_malloc_full_pool(struct unit_test_state *uts)
+{
+	/* Try to allocate the full pool size - should fail due to overhead */
+	ut_assertnull(malloc(TOTAL_MALLOC_LEN));
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_full_pool, 0);
+
+/* Test filling the entire malloc pool with allocations */
+static int common_test_malloc_fill_pool(struct unit_test_state *uts)
+{
+	int alloc_size, before, count, i, total;
+	const int ptr_table_size = 0x100000;
+	void **ptrs;
+	void *ptr;
+
+	/*
+	 * This is only really safe on sandbox since it uses up all memory and
+	 * assumes that at least half of the malloc() pool is unallocated
+	 */
+	if (!IS_ENABLED(CONFIG_SANDBOX))
+		return -EAGAIN;
+
+	before = get_alloced_size();
+
+	/* Use memory outside malloc pool to store pointers */
+	ptrs = map_sysmem(0x1000, ptr_table_size);
+
+	/* Allocate until we run out of memory, using random sizes */
+	count = 0;
+	total = 0;
+	while (1) {
+		/* Random size up to 1 MB */
+		alloc_size = rand() % (SZ_1M);
+		ptr = malloc(alloc_size);
+		if (!ptr)
+			break;
+		ptrs[count++] = ptr;
+		total += alloc_size;
+		/* Safety check to avoid infinite loop */
+		if (count >= ptr_table_size / sizeof(void *))
+			break;
+	}
+	printf("count %d total %d ptr_table_size %d\n", count, total,
+	       ptr_table_size);
+
+	/*
+	 * Should have allocated most of the pool - if we can't allocate
+	 * 1MB, then at most 1MB is available, so we must have allocated
+	 * at least (pool_size - 1MB)
+	 */
+	ut_assert(count > 0);
+	ut_assert(count < ptr_table_size / sizeof(void *));
+	ut_assert(get_alloced_size() >= TOTAL_MALLOC_LEN - SZ_1M);
+
+	/* Free all allocations */
+	for (i = 0; i < count; i++)
+		free(ptrs[i]);
+
+	/* Should be back to starting state */
+	ut_asserteq(before, get_alloced_size());
+
+	/* Verify we can allocate large blocks again */
+	ptr = malloc(TOTAL_MALLOC_LEN / 2);
+	ut_assertnonnull(ptr);
+	free(ptr);
+
+	unmap_sysmem(ptrs);
+
+	return 0;
+}
+COMMON_TEST(common_test_malloc_fill_pool, 0);

From patchwork Mon Dec 1 17:05:15 2025
X-Patchwork-Submitter: Simon Glass
X-Patchwork-Id: 810
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:15 -0700
Message-ID: <20251201170529.3237986-37-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 36/37] malloc: Switch to the new malloc() implementation

Add CONFIG_SYS_MALLOC_LEGACY to select the current allocator and adjust
the header file and Makefile rule to use the new dlmalloc implementation.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 Kconfig          | 16 ++++++++++++++++
 common/Makefile  |  4 ++++
 include/malloc.h | 19 +++++++------------
 3 files changed, 27 insertions(+), 12 deletions(-)

diff --git a/Kconfig b/Kconfig
index c4a65597035..378ecfb1867 100644
--- a/Kconfig
+++ b/Kconfig
@@ -511,6 +511,22 @@ config SPL_SYS_MALLOC_SMALL
 
 	  If unsure, say Y to minimize SPL code size.
 
+config SYS_MALLOC_LEGACY
+	bool "Use legacy dlmalloc 2.6.6 instead of dlmalloc 2.8.6"
+	help
+	  Select this option to use the older dlmalloc 2.6.6 implementation
+	  instead of the newer 2.8.6 version. The legacy allocator uses a
+	  simpler bin system, has smaller code size in most cases but uses
+	  more static data.
+
+	  The legacy allocator may have slightly worse fragmentation behavior
+	  for some workloads but has been well-tested over many years in U-Boot.
+
+	  This option is provided for compatibility and testing. New boards
+	  should use the default dlmalloc 2.8.6.
+
+	  If unsure, say N to use the modern allocator.
+
 config TOOLS_DEBUG
 	bool "Enable debug information for tools"
 	help

diff --git a/common/Makefile b/common/Makefile
index ffa46ce5e06..fdf4cff94f4 100644
--- a/common/Makefile
+++ b/common/Makefile
@@ -71,7 +71,11 @@ obj-$(CONFIG_BOUNCE_BUFFER) += bouncebuf.o
 obj-$(CONFIG_$(PHASE_)SERIAL) += console.o
 obj-$(CONFIG_CROS_EC) += cros_ec.o
+ifdef CONFIG_SYS_MALLOC_LEGACY
 obj-y += dlmalloc_old.o
+else
+obj-y += dlmalloc.o
+endif
 obj-$(CONFIG_$(PHASE_)SYS_MALLOC_F) += malloc_simple.o
 obj-$(CONFIG_$(PHASE_)CYCLIC) += cyclic.o

diff --git a/include/malloc.h b/include/malloc.h
index 997651e5c9c..73b2da0c383 100644
--- a/include/malloc.h
+++ b/include/malloc.h
@@ -1,15 +1,4 @@
 /* SPDX-License-Identifier: GPL-2.0+ */
-/*
- * Stub header to include the old malloc header
- *
- * This allows the old malloc implementation to be preserved while
- * preparing for a new dlmalloc version.
- */
-
-#include
-
-#if 0 /* not active yet */
-
 /*
   Default header file for malloc-2.8.x, written by Doug Lea
   and released to the public domain, as explained at
@@ -32,6 +21,12 @@
  * If MSPACES is defined, declarations for mspace versions are included.
  */
 
+#ifdef CONFIG_SYS_MALLOC_LEGACY
+
+#include
+
+#else
+
 #ifndef MALLOC_280_H
 #define MALLOC_280_H
@@ -748,4 +743,4 @@ int initf_malloc(void);
 
 #endif /* MALLOC_280_H */
 
-#endif /* not active yet */
+#endif /* !CONFIG_SYS_MALLOC_LEGACY */

From patchwork Mon Dec 1 17:05:16 2025
From: Simon Glass
To: U-Boot Concept
Date: Mon, 1 Dec 2025 10:05:16 -0700
Message-ID: <20251201170529.3237986-38-sjg@u-boot.org>
In-Reply-To: <20251201170529.3237986-1-sjg@u-boot.org>
References: <20251201170529.3237986-1-sjg@u-boot.org>
CC: Heinrich Schuchardt, Simon Glass, Claude
Subject: [Concept] [PATCH 37/37] doc: Add malloc documentation

Add doc/develop/malloc.rst documenting U-Boot's dynamic memory
allocation implementation:

- Overview of pre/post-relocation malloc phases
- dlmalloc 2.8.6 version and features
- Data structure sizes (~500 bytes vs 1032 bytes in 2.6.6)
- Configuration options for code-size optimization
- Debugging features (mcheck, valgrind, malloc testing)
- API reference

Also add an introductory comment to dlmalloc.c summarising the U-Boot
configuration.

Co-developed-by: Claude
Signed-off-by: Simon Glass
---
 common/dlmalloc.c            |  15 ++
 doc/arch/sandbox/sandbox.rst |   2 +
 doc/develop/index.rst        |   1 +
 doc/develop/malloc.rst       | 333 +++++++++++++++++++++++++++++++++++
 4 files changed, 351 insertions(+)
 create mode 100644 doc/develop/malloc.rst

diff --git a/common/dlmalloc.c b/common/dlmalloc.c
index 54fd2e4a08a..c1c9d8a8938 100644
--- a/common/dlmalloc.c
+++ b/common/dlmalloc.c
@@ -1,4 +1,19 @@
 // SPDX-License-Identifier: GPL-2.0+
+/*
+ * U-Boot Dynamic Memory Allocator
+ *
+ * This is Doug Lea's dlmalloc version 2.8.6, adapted for U-Boot.
+ *
+ * U-Boot Configuration:
+ * - Uses sbrk() via MORECORE (no mmap support)
+ * - Pre-relocation: redirects to malloc_simple.c
+ * - Post-relocation: full dlmalloc with heap from CONFIG_SYS_MALLOC_LEN
+ * - Sandbox keeps full features for testing; other boards use:
+ *   INSECURE=1, NO_MALLINFO=1, NO_REALLOC_IN_PLACE=1
+ *
+ * See doc/develop/malloc.rst for more information.
+ */
+
 /*
   Copyright 2023 Doug Lea

diff --git a/doc/arch/sandbox/sandbox.rst b/doc/arch/sandbox/sandbox.rst
index 9e9b027be8b..0d94c5a49cf 100644
--- a/doc/arch/sandbox/sandbox.rst
+++ b/doc/arch/sandbox/sandbox.rst
@@ -688,6 +688,8 @@ If sdl-config is on a different path from the default, set the SDL_CONFIG
 environment variable to the correct pathname before building U-Boot.
 
+.. _sandbox_valgrind:
+
 Using valgrind / memcheck
 -------------------------

diff --git a/doc/develop/index.rst b/doc/develop/index.rst
index d325ad23897..c40ada5899f 100644
--- a/doc/develop/index.rst
+++ b/doc/develop/index.rst
@@ -51,6 +51,7 @@ Implementation
    global_data
    logging
    makefiles
+   malloc
    menus
    printf
    smbios

diff --git a/doc/develop/malloc.rst b/doc/develop/malloc.rst
new file mode 100644
index 00000000000..3c6b6ea65a4
--- /dev/null
+++ b/doc/develop/malloc.rst
@@ -0,0 +1,333 @@
+.. SPDX-License-Identifier: GPL-2.0-or-later
+
+Dynamic Memory Allocation
+=========================
+
+U-Boot uses Doug Lea's malloc implementation (dlmalloc) for dynamic memory
+allocation. This provides the standard C library functions malloc(), free(),
+realloc(), calloc(), and memalign().
+
+Overview
+--------
+
+U-Boot's malloc implementation has two phases:
+
+1. **Pre-relocation (simple malloc)**: Before U-Boot relocates itself to the
+   top of RAM, a simple malloc implementation is used. This allocates memory
+   from a small fixed-size pool and does not support free(). This is
+   controlled by CONFIG_SYS_MALLOC_F_LEN.
+
+2. **Post-relocation (full malloc)**: After relocation, the full dlmalloc
+   implementation is initialized with a larger heap. The heap size is
+   controlled by CONFIG_SYS_MALLOC_LEN.
+
+The transition between these phases is managed by the GD_FLG_FULL_MALLOC_INIT
+flag in global_data.
+
+dlmalloc Version
+----------------
+
+U-Boot uses dlmalloc version 2.8.6 (updated from 2.6.6 in 2025), which
+provides:
+
+- Efficient memory allocation with low fragmentation
+- Small bins for allocations up to 256 bytes (32 bins)
+- Tree bins for larger allocations (32 bins)
+- Best-fit allocation strategy
+- Boundary tags for coalescing free blocks
+
+Data Structures
+---------------
+
+The allocator uses two main static structures:
+
+**malloc_state** (~944 bytes on 64-bit systems):
+
+- ``smallbins``: 33 pairs of pointers for small allocations (528 bytes)
+- ``treebins``: 32 tree root pointers for large allocations (256 bytes)
+- ``top``: Pointer to the top chunk (wilderness)
+- ``dvsize``, ``topsize``: Sizes of designated victim and top chunks
+- Bookkeeping: footprint tracking, bitmaps, segment info
+
+**malloc_params** (48 bytes on 64-bit systems):
+
+- Page size, granularity, thresholds for mmap and trim
+
+For comparison, the older dlmalloc 2.6.6 used a single 2064-byte ``av_``
+array on 64-bit systems. The 2.8.6 version uses about half the static data
+while providing better algorithms.
+
+Kconfig Options
+---------------
+
+Main U-Boot (post-relocation)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``CONFIG_SYS_MALLOC_LEN``
+  Hex value defining the size of the main malloc pool after relocation.
+  This is the heap available for driver model, file systems, and general
+  dynamic memory allocation. Default: 0x400000 (4 MB), varies by platform.
+
+``CONFIG_SYS_MALLOC_F``
+  Bool to enable malloc() pool before relocation. Required for driver model
+  and many boot features. Default: y if DM is enabled.
+
+``CONFIG_SYS_MALLOC_F_LEN``
+  Hex value for the size of the pre-relocation malloc pool. This small pool
+  is used before DRAM is initialized. Default: 0x2000 (8 KB), varies by
+  platform.
+
+``CONFIG_SYS_MALLOC_CLEAR_ON_INIT``
+  Bool to zero the malloc pool on initialization. This slows boot but
+  ensures malloc returns zeroed memory. Disable for faster boot when using
+  large heaps. Default: y
+
+``CONFIG_SYS_MALLOC_DEFAULT_TO_INIT``
+  Bool to call malloc_init() when mem_malloc_init() is called. Used when
+  moving malloc from one memory region to another. Default: n
+
+``CONFIG_SYS_MALLOC_BOOTPARAMS``
+  Bool to malloc a buffer for bi_boot_params instead of using a fixed
+  location. Default: n
+
+``CONFIG_VALGRIND``
+  Bool to annotate malloc operations for Valgrind memory debugging. Only
+  useful when running sandbox builds under Valgrind. See
+  :ref:`sandbox_valgrind` for details. Default: n
+
+``CONFIG_SYS_MALLOC_SMALL``
+  Bool to enable code-size optimisations for dlmalloc. This option combines
+  several optimisations:
+
+  - Disables tree bins for allocations >= 256 bytes, using simple
+    linked-list bins instead. This changes large-allocation performance
+    from O(log n) to O(n) but saves ~1.5-2KB.
+  - Simplifies memalign() by removing fallback retry logic. Saves ~100-150
+    bytes.
+  - Disables the in-place realloc optimisation. Saves ~200 bytes.
+  - Uses static malloc parameters instead of runtime-configurable ones.
+  - Converts small chunk macros to functions to reduce code duplication.
+
+  These optimisations may increase fragmentation and reduce performance for
+  workloads with many large or aligned allocations, but are suitable for
+  most U-Boot use cases where code size is more important. Default: n
+
+``CONFIG_SYS_MALLOC_LEGACY``
+  Bool to use the legacy dlmalloc 2.6.6 implementation instead of the
+  modern dlmalloc 2.8.6. The legacy allocator has smaller code size (~450
+  bytes less) but uses more static data (~500 bytes more on 64-bit).
+  Provided for compatibility and testing. New boards should use the modern
+  allocator. Default: n
+
+xPL Boot Phases
+~~~~~~~~~~~~~~~
+
+The SPL (Secondary Program Loader), TPL (Tertiary Program Loader), and VPL
+(Verification Program Loader) boot phases each have their own malloc
+configuration options. These are prefixed with ``SPL_``, ``TPL_``, or
+``VPL_`` and typically mirror the main U-Boot options.
+
+Similar to U-Boot proper, xPL phases can use simple malloc
+(``malloc_simple``) for pre-DRAM allocation. However, unlike U-Boot proper,
+which transitions from simple malloc to full dlmalloc after relocation, xPL
+phases that enable ``CONFIG_SPL_SYS_MALLOC_SIMPLE`` (or equivalent) cannot
+transition to full malloc within that phase, since the dlmalloc code is not
+included in the binary.
+
+Note: When building with ``CONFIG_XPL_BUILD``, the code uses
+``CONFIG_IS_ENABLED()`` macros to automatically select the appropriate
+phase-specific option (e.g., ``CONFIG_IS_ENABLED(SYS_MALLOC_F)`` expands to
+``CONFIG_SPL_SYS_MALLOC_F`` when building SPL).
+
+SPL (Secondary Program Loader)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``CONFIG_SPL_SYS_MALLOC_F``
+  Bool to enable malloc() pool in SPL before DRAM is initialized. Required
+  for driver model in SPL. Default: y if SPL_FRAMEWORK and SYS_MALLOC_F.
+
+``CONFIG_SPL_SYS_MALLOC_F_LEN``
+  Hex value for SPL pre-DRAM malloc pool size. Default: inherits from
+  CONFIG_SYS_MALLOC_F_LEN.
+
+``CONFIG_SPL_SYS_MALLOC_SIMPLE``
+  Bool to use only malloc_simple functions in SPL instead of full dlmalloc.
+  The simple allocator is smaller (saves ~600 bytes) but cannot free()
+  memory. Default: n
+
+``CONFIG_SPL_SYS_MALLOC``
+  Bool to enable a full malloc pool in SPL after DRAM is initialized.
+  Used with CONFIG_SPL_CUSTOM_SYS_MALLOC_ADDR. Default: n
+
+``CONFIG_SPL_HAS_CUSTOM_MALLOC_START``
+  Bool to use a custom address for the SPL malloc pool instead of the
+  default location. Requires CONFIG_SPL_CUSTOM_SYS_MALLOC_ADDR. Default: n
+
+``CONFIG_SPL_CUSTOM_SYS_MALLOC_ADDR``
+  Hex address for the SPL malloc pool when using a custom location.
+
+``CONFIG_SPL_SYS_MALLOC_SIZE``
+  Hex value for SPL malloc pool size when using CONFIG_SPL_SYS_MALLOC.
+  Default: 0x100000 (1 MB).
+
+``CONFIG_SPL_SYS_MALLOC_CLEAR_ON_INIT``
+  Bool to zero the SPL malloc pool on initialization. Useful when the
+  malloc pool is in a region that must be zeroed before first use.
+  Default: inherits from CONFIG_SYS_MALLOC_CLEAR_ON_INIT.
+
+``CONFIG_SPL_SYS_MALLOC_SMALL``
+  Bool to enable code-size optimisations for dlmalloc in SPL. Enables the
+  same optimisations as CONFIG_SYS_MALLOC_SMALL (disables tree bins,
+  simplifies memalign, disables in-place realloc, uses static parameters,
+  converts small chunk macros to functions). SPL typically has predictable
+  memory usage where these optimisations have minimal impact, making the
+  code-size savings worthwhile. Default: y
+
+``CONFIG_SPL_STACK_R_MALLOC_SIMPLE_LEN``
+  Hex value for the malloc_simple heap size after switching to the DRAM
+  stack in SPL. Only used when CONFIG_SPL_STACK_R and
+  CONFIG_SPL_SYS_MALLOC_SIMPLE are enabled. Provides a larger heap than
+  the initial SRAM pool. Default: 0x100000 (1 MB).
+
+TPL (Tertiary Program Loader)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``CONFIG_TPL_SYS_MALLOC_F``
+  Bool to enable malloc() pool in TPL. Default: y if TPL and SYS_MALLOC_F.
+
+``CONFIG_TPL_SYS_MALLOC_F_LEN``
+  Hex value for TPL malloc pool size. Default: inherits from
+  CONFIG_SPL_SYS_MALLOC_F_LEN.
+
+``CONFIG_TPL_SYS_MALLOC_SIMPLE``
+  Bool to use only malloc_simple in TPL instead of full dlmalloc. Saves
+  code size at the cost of no free() support. Default: n
+
+VPL (Verification Program Loader)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+``CONFIG_VPL_SYS_MALLOC_F``
+  Bool to enable malloc() pool in VPL. Default: y if VPL and SYS_MALLOC_F.
+
+``CONFIG_VPL_SYS_MALLOC_F_LEN``
+  Hex value for VPL malloc pool size. Default: inherits from
+  CONFIG_SPL_SYS_MALLOC_F_LEN.
+
+``CONFIG_VPL_SYS_MALLOC_SIMPLE``
+  Bool to use only malloc_simple in VPL. Default: y
+
+dlmalloc Compile-Time Options
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+These options are set in the U-Boot section of ``common/dlmalloc.c``:
+
+``NO_MALLOC_STATS``
+  Disable the malloc_stats() function. Default: 1 (disabled)
+
+``NO_MALLINFO``
+  Disable the mallinfo() function. Default: 1 for non-sandbox builds
+
+``INSECURE``
+  Disable runtime heap-validation checks. This reduces code size but
+  removes detection of heap corruption. Default: 1 for non-sandbox builds
+
+``NO_REALLOC_IN_PLACE``
+  Disable the in-place realloc optimisation. Enabled by
+  CONFIG_SYS_MALLOC_SMALL. Saves ~200 bytes of code. Default: 0
+
+``NO_TREE_BINS``
+  Disable tree bins for large allocations (>= 256 bytes), using simple
+  linked-list bins instead. Enabled by CONFIG_SYS_MALLOC_SMALL. Saves
+  ~1.5-2KB but changes performance from O(log n) to O(n). Default: 0
+
+``SIMPLE_MEMALIGN``
+  Simplify memalign() by removing fallback retry logic. Enabled by
+  CONFIG_SYS_MALLOC_SMALL. Saves ~100-150 bytes. Default: 0
+
+``STATIC_MALLOC_PARAMS``
+  Use static malloc parameters instead of runtime-configurable ones.
+  Enabled by CONFIG_SYS_MALLOC_SMALL. Default: 0
+
+``SMALLCHUNKS_AS_FUNCS``
+  Convert the small-chunk macros (insert_small_chunk,
+  unlink_first_small_chunk) to functions to reduce code duplication.
+  Enabled by CONFIG_SYS_MALLOC_SMALL. Default: 0
+
+``SIMPLE_SYSALLOC``
+  Use a simplified sys_alloc() that only supports contiguous sbrk()
+  extension. Enabled automatically for non-sandbox builds. Saves code by
+  removing mmap and multi-segment support. Default: 1 for non-sandbox,
+  0 for sandbox
+
+``MORECORE_CONTIGUOUS``
+  Assume sbrk() returns contiguous memory. Default: 1
+
+``MORECORE_CANNOT_TRIM``
+  Disable releasing memory back to the system. Default: 1
+
+``HAVE_MMAP``
+  Enable mmap() for large allocations. Default: 0 (U-Boot uses sbrk only)
+
+Code Size
+---------
+
+The dlmalloc 2.8.6 implementation is larger than the older 2.6.6 version
+due to its more sophisticated algorithms. To minimise code size for
+resource-constrained systems, U-Boot provides several optimisation levels:
+
+**Default optimisations** (always enabled for non-sandbox builds):
+
+- INSECURE=1 (saves ~1100 bytes)
+- NO_MALLINFO=1 (saves ~200 bytes)
+- SIMPLE_SYSALLOC=1 (saves code by simplifying sys_alloc)
+
+**CONFIG_SYS_MALLOC_SMALL** (additional optimisations, default y for SPL):
+
+- NO_TREE_BINS=1 (saves ~1.5-2KB)
+- NO_REALLOC_IN_PLACE=1 (saves ~200 bytes)
+- SIMPLE_MEMALIGN=1 (saves ~100-150 bytes)
+- STATIC_MALLOC_PARAMS=1
+- SMALLCHUNKS_AS_FUNCS=1 (reduces code duplication)
+
+With default optimisations only, the code-size increase over dlmalloc 2.6.6
+is about 450 bytes, while data usage decreases by about 500 bytes.
+
+With CONFIG_SYS_MALLOC_SMALL enabled, significant additional code savings
+are achieved, making it suitable for size-constrained SPL builds.
+
+Sandbox builds retain full functionality for testing, including mallinfo()
+for memory-leak detection.
+
+Debugging
+---------
+
+For debugging heap issues, consider:
+
+1. **mcheck**: U-Boot includes mcheck support for detecting buffer
+   overruns. Enable CONFIG_MCHECK to use mcheck(), mcheck_pedantic(), and
+   mcheck_check_all().
+
+2. **Valgrind**: When running sandbox with Valgrind, the allocator includes
+   annotations to help detect memory errors. See :ref:`sandbox_valgrind`.
+
+3. **malloc testing**: Unit tests can use malloc_enable_testing() to
+   simulate allocation failures.
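The allocation-failure simulation mentioned in point 3 can be sketched outside U-Boot. Note that malloc_enable_testing() is U-Boot-specific; the `fail_after()` / `test_malloc()` wrapper below is a hypothetical standalone stand-in that shows the idea: permit a fixed number of successful allocations, then return NULL so error paths can be exercised.

```c
#include <assert.h>
#include <stdlib.h>

/*
 * Hypothetical stand-in for U-Boot's malloc_enable_testing(): allow 'n'
 * successful allocations, then fail with NULL. -1 means "no limit".
 */
static int allocs_left = -1;

static void fail_after(int n)
{
	allocs_left = n;
}

static void *test_malloc(size_t size)
{
	if (!allocs_left)
		return NULL;	/* simulated out-of-memory */
	if (allocs_left > 0)
		allocs_left--;	/* count down towards the failure point */

	return malloc(size);
}
```

Test code then calls fail_after(n) before the code under test and checks that the NULL return is handled gracefully.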
+
+API Reference
+-------------
+
+Standard C functions:
+
+- ``void *malloc(size_t size)`` - Allocate memory
+- ``void free(void *ptr)`` - Free allocated memory
+- ``void *realloc(void *ptr, size_t size)`` - Resize an allocation
+- ``void *calloc(size_t nmemb, size_t size)`` - Allocate zeroed memory
+- ``void *memalign(size_t alignment, size_t size)`` - Aligned allocation
+
+Pre-relocation simple malloc (from malloc_simple.c):
+
+- ``void *malloc_simple(size_t size)`` - Simple bump allocator
+- ``void *memalign_simple(size_t alignment, size_t size)`` - Aligned version
+
+See Also
+--------
+
+- :doc:`memory` - Memory management overview
+- :doc:`global_data` - Global data and the GD_FLG_FULL_MALLOC_INIT flag
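The pre-relocation bump allocator listed in the API reference can be illustrated with a standalone sketch (this is not U-Boot's malloc_simple.c; the pool, its size, and the function names here are illustrative assumptions): allocation just rounds the current offset up to the requested alignment and advances it, and there is no free().

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define POOL_SIZE 4096

/* Fixed pool standing in for the CONFIG_SYS_MALLOC_F_LEN region */
static _Alignas(64) uint8_t pool[POOL_SIZE];
static size_t pool_ptr;	/* current offset, akin to gd->malloc_ptr */

/* Align-then-bump allocation; returns NULL when the pool is exhausted */
static void *simple_memalign(size_t align, size_t size)
{
	size_t addr = (pool_ptr + align - 1) & ~(align - 1);

	if (addr + size > POOL_SIZE)
		return NULL;
	pool_ptr = addr + size;

	return pool + addr;
}

/* Plain allocation uses pointer-size alignment */
static void *simple_malloc(size_t size)
{
	return simple_memalign(sizeof(void *), size);
}
```

Because only an offset is tracked, memory is never returned; this is why a phase built with only the simple allocator cannot transition to full malloc, as described under xPL Boot Phases.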